Privacy Implications of Explainable AI in Data-Driven Systems
Abstract
Machine learning (ML) models, though demonstrably powerful, suffer from a lack of interpretability. This absence of transparency, often referred to as the black-box nature of ML models, undermines trust and underscores the need for efforts to enhance their explainability. Explainable AI (XAI) techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes of these complex models. Techniques such as Counterfactual Explanations (CF) and Feature Importance play a crucial role in achieving this goal. Furthermore, high-quality and diverse data remain the foundation for robust and trustworthy ML applications. In many applications, the data used to train ML models and XAI explainers contain sensitive information. In this context, numerous privacy-preserving techniques, such as differential privacy, can be employed to safeguard sensitive information in the data. A conflict between XAI and privacy solutions then emerges due to their opposing goals. Because XAI techniques provide reasoning for model behavior, they reveal information about ML models, such as their decision boundaries, feature values, or the gradients of deep learning models, when explanations are exposed to a third party. Attackers can use these explanations to mount privacy-breaching attacks such as model extraction, inference, and membership attacks. This dilemma underscores the challenge of finding the right equilibrium between understanding ML decision-making and safeguarding privacy.
Credit Attribution and Stable Compression
Authors: Roi Livni, Shay Moran, Kobbi Nissim, Chirag Pabbaraju
Abstract
Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators. We study credit attribution by machine learning algorithms. We propose new definitions--relaxations of Differential Privacy--that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output. Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm). We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.
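The abstract does not state the formal definition. One plausible formalization of the $k$-point relaxation, offered here as our reading rather than the paper's exact wording, is:

```latex
An algorithm $A$ is \emph{$(k,\varepsilon,\delta)$-stable} if for every dataset
$S = (z_1, \dots, z_n)$ there exists a set $I \subseteq [n]$ with $|I| \le k$
(possibly chosen adaptively by $A$) such that for every $i \notin I$, every
replacement $z_i'$, and every event $E$,
\[
  \Pr[A(S) \in E] \;\le\; e^{\varepsilon}\,\Pr\bigl[A(S^{\,i \to z_i'}) \in E\bigr] + \delta .
\]
Points outside the credited set $I$ retain the usual differential-privacy-style
guarantee, and $k = 0$ recovers $(\varepsilon,\delta)$-differential privacy.
```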
Privacy Preserving Machine Learning for Electronic Health Records using Federated Learning and Differential Privacy
Abstract
An Electronic Health Record (EHR) is an electronic database used by healthcare providers to store patients' medical records, which may include diagnoses, treatments, costs, and other personal information. Machine learning (ML) algorithms can be used to extract and analyze this patient data to improve patient care. Because patient records contain highly sensitive information, such as social security numbers (SSNs) and residential addresses, privacy-preserving techniques, such as federated learning and differential privacy, need to be applied to these ML models.
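The abstract leaves the combination of the two techniques implicit; a common construction is federated averaging with per-client clipping and Gaussian noise (DP-FedAvg style). Below is a minimal sketch of that idea on synthetic data; the model, client data, and hyperparameters are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's plain logistic-regression SGD, returning its weight delta."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w - weights

def dp_fedavg_round(weights, clients, clip=1.0, noise_mult=1.0):
    """Aggregate clipped client deltas and add Gaussian noise (DP-FedAvg style)."""
    deltas = []
    for X, y in clients:
        d = local_update(weights, X, y)
        d *= min(1.0, clip / (np.linalg.norm(d) + 1e-12))  # clip each client's update
        deltas.append(d)
    noise = rng.normal(0, noise_mult * clip / len(clients), size=weights.shape)
    return weights + np.mean(deltas, axis=0) + noise

# Synthetic "EHR-like" clients: 5 hospitals, 3 features each.
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = dp_fedavg_round(w, clients)
print("learned weights:", np.round(w, 2))
```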
On Computing Pairwise Statistics with Local Differential Privacy
Authors: Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Adam Sealfon
Subjects:
Data Structures and Algorithms (cs.DS); Cryptography and Security (cs.CR)
Abstract
We study the problem of computing pairwise statistics, i.e., ones of the form $\binom{n}{2}^{-1} \sum_{i \ne j} f(x_i, x_j)$, where $x_i$ denotes the input to the $i$th user, with differential privacy (DP) in the local model. This formulation captures important metrics such as Kendall's $\tau$ coefficient, Area Under Curve, Gini's mean difference, Gini's entropy, etc. We give several novel and generic algorithms for the problem, leveraging techniques from DP algorithms for linear queries.
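For concreteness, Gini's mean difference arises from $f(x_i, x_j) = |x_i - x_j|$ and Kendall's $\tau$ from a sign-agreement $f$ over paired inputs. The snippet below evaluates the pairwise-statistic formula directly as a non-private reference point; it is not the paper's local-DP algorithm.

```python
import numpy as np
from scipy.stats import kendalltau

def pairwise_statistic(x, f):
    """binom(n,2)^{-1} * sum over ordered pairs i != j of f(x_i, x_j)."""
    n = len(x)
    total = sum(f(x[i], x[j]) for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
x = rng.normal(size=50)

# Gini's mean difference (up to the normalization convention) via f = |a - b|.
gmd = pairwise_statistic(list(x), lambda a, b: abs(a - b))
print("pairwise |x_i - x_j| statistic:", round(gmd, 3))

# Kendall's tau as a pairwise statistic over paired inputs (u_i, v_i);
# ordered pairs double-count, hence the division by 2.
uv = rng.normal(size=(50, 2))
f_tau = lambda a, b: np.sign((a[0] - b[0]) * (a[1] - b[1]))
tau_pairwise = pairwise_statistic(list(uv), f_tau) / 2
print(round(tau_pairwise, 3), round(kendalltau(uv[:, 0], uv[:, 1])[0], 3))
```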
Keyword: privacy
Artificial Intelligence, VR, AR and Metaverse Technologies for Human Resources Management
Abstract
Human Resources (HR) technology solutions encompass software and hardware tools designed to automate HR processes; gather, process, and analyze data; utilize it for strategic decision-making; and execute HR professionals' tasks while prioritizing security and privacy considerations. As in numerous other domains, digital transformation and emerging technologies have begun to be integrated into HR processes. These technologies are utilized by HR professionals and the various stakeholders involved in HR operations. This study evaluates the utilization of Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), and the Metaverse within HR management, focusing on current trends and potential opportunities. A survey was conducted to gauge HR professionals' perceptions of and critiques regarding these technologies. Participants included HR department officers, academics specializing in HR, and staff who had taken HR courses at various levels. The results were subjected to comparative analysis in this article.
An Exploratory Study on Human-Centric Video Anomaly Detection through Variational Autoencoders and Trajectory Prediction
Abstract
Video Anomaly Detection (VAD) represents a challenging and prominent research task within computer vision. In recent years, Pose-based Video Anomaly Detection (PAD) has drawn considerable attention from the research community due to several inherent advantages over pixel-based approaches, despite occasionally suboptimal performance. Specifically, PAD is characterized by reduced computational complexity, intrinsic privacy preservation, and the mitigation of concerns related to discrimination and bias against specific demographic groups. This paper introduces TSGAD, a novel human-centric Two-Stream Graph-Improved Anomaly Detection method leveraging Variational Autoencoders (VAEs) and trajectory prediction. TSGAD explores the possibility of utilizing VAEs as a new approach for pose-based human-centric VAD alongside the benefits of trajectory prediction. We demonstrate TSGAD's effectiveness through comprehensive experimentation on benchmark datasets. TSGAD achieves results comparable to state-of-the-art methods, showcasing the potential of adopting variational autoencoders and suggesting a promising direction for future research. The code base for this work is available at this https URL.
Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
Authors: Prashanth Vijayaraghavan, Hongzhi Wang, Luyao Shi, Tyler Baldwin, David Beymer, Ehsan Degan
Subjects:
Computation and Language (cs.CL)
Abstract
Recently, there has been a growing availability of pre-trained text models in various model repositories. These models greatly reduce the cost of training new models from scratch, as they can be fine-tuned for specific tasks or trained on large datasets. However, their original training data may not be publicly accessible due to privacy, security, or intellectual property issues. In this paper, we aim to develop a lightweight student network that can learn from multiple teacher models without accessing their original training data. Hence, we investigate Data-Free Knowledge Amalgamation (DFKA), a knowledge-transfer task that combines insights from multiple pre-trained teacher models and transfers them effectively to a compact student network. To accomplish this, we propose STRATANET, a modeling framework comprising (a) a steerable data generator that produces text data tailored to each teacher and (b) an amalgamation module that implements a self-regulative strategy, using confidence estimates from the teachers' different layers to selectively integrate their knowledge and train a versatile student. We evaluate our method on three benchmark text classification datasets with varying labels or domains. Empirically, we demonstrate that the student model learned using our STRATANET significantly outperforms several baselines under both data-driven and data-free constraints.
Blockchain for Academic Integrity: Developing the Blockchain Academic Credential Interoperability Protocol (BACIP)
Authors: Juan A. Berrios Moya
Subjects:
Cryptography and Security (cs.CR); Computers and Society (cs.CY)
Abstract
This research introduces the Blockchain Academic Credential Interoperability Protocol (BACIP), designed to significantly enhance the security, privacy, and interoperability of verifying academic credentials globally, addressing the widespread issue of academic fraud. BACIP integrates dual blockchain architecture, smart contracts, and zero-knowledge proofs to offer a scalable and transparent framework aimed at reducing fraud and improving the mobility and opportunities for students and professionals worldwide. The research methodology adopts a mixed-methods approach, involving a rigorous review of pertinent literature and systematic integration of advanced technological components. This includes both qualitative and quantitative analyses that underpin the development of a universally compatible system. Preliminary evaluations suggest that BACIP could enhance verification efficiency and bolster security against tampering and unauthorized access. While the theoretical framework and practical implementations have laid a solid foundation, the protocol's real-world efficacy awaits empirical validation in a production environment. Future research will focus on deploying a prototype, establishing robust validation policies, and defining precise testing parameters. This critical phase is indispensable for a thorough assessment of BACIP's operational robustness and its compliance with international educational standards. This work contributes significantly to the academic field by proposing a robust model for managing and safeguarding academic credentials, thus laying a strong foundation for further innovation in credential verification using blockchain technology.
DataFreeShield: Defending Adversarial Attacks without Training Data
Authors: Hyeyoon Lee, Kanghyun Choi, Dain Kwon, Sunjong Park, Mayoore Selvarasa Jaiswal, Noseong Park, Jonghyun Choi, Jinho Lee
Subjects:
Machine Learning (cs.LG); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recent advances in adversarial robustness rely on an abundant set of training data, where using external or additional datasets has become a common setting. However, in real life, the training data is often kept private for security and privacy reasons, while only the pretrained weights are available to the public. In such scenarios, existing methods that assume access to the original data become inapplicable. Thus, we investigate the pivotal problem of data-free adversarial robustness, where we try to achieve adversarial robustness without accessing any real data. Through a preliminary study, we highlight the severity of the problem by showing that robustness without the original dataset is difficult to achieve, even with datasets from similar domains. To address this issue, we propose DataFreeShield, which tackles the problem from two perspectives: surrogate dataset generation and adversarial training using the generated data. Through extensive validation, we show that DataFreeShield outperforms baselines, demonstrating that the proposed method constitutes the first entirely data-free solution to the adversarial robustness problem.
ProBE: Proportioning Privacy Budget for Complex Exploratory Decision Support
Authors: Nada Lahjouji, Sameera Ghayyur, Xi He, Sharad Mehrotra
Abstract
This paper studies privacy in the context of complex decision support queries composed of multiple conditions on different aggregate statistics combined using disjunction and conjunction operators. Utility requirements for such queries necessitate private mechanisms that guarantee a bound on the false-negative and false-positive errors. This paper formally defines complex decision support queries and their accuracy requirements, and provides algorithms that proportion the existing budget to optimally minimize privacy loss while supporting a bounded guarantee on accuracy. Our experimental results on multiple real-life datasets show that our algorithms successfully maintain such utility guarantees while also minimizing privacy loss.
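As a toy illustration of the setting (not the paper's proportioning algorithm), the sketch below evaluates a conjunctive decision query with Laplace noise on each aggregate and a naive even split of the privacy budget, then estimates the resulting error rate empirically:

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_release(true_value, epsilon, sensitivity=1.0):
    """Laplace mechanism: epsilon-DP release of an aggregate statistic."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

def decision_query(count_a, count_b, t_a, t_b, epsilon_total):
    """Evaluate (count_a > t_a) AND (count_b > t_b) under an even budget split."""
    eps = epsilon_total / 2  # naive proportioning; ProBE optimizes this split
    return laplace_release(count_a, eps) > t_a and laplace_release(count_b, eps) > t_b

# Estimate the true-positive rate when both conditions actually hold.
trials = [decision_query(120, 95, t_a=100, t_b=90, epsilon_total=1.0)
          for _ in range(10_000)]
print("empirical true-positive rate:", np.mean(trials))
```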
Breaking Secure Aggregation: Label Leakage from Aggregated Gradients in Federated Learning
Authors: Zhibo Wang, Zhiwei Chang, Jiahui Hu, Xiaoyi Pang, Jiacheng Du, Yongle Chen, Kui Ren
Subjects:
Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Abstract
Federated Learning (FL) exhibits privacy vulnerabilities under gradient inversion attacks (GIAs), which can extract private information from individual gradients. To enhance privacy, FL incorporates Secure Aggregation (SA) to prevent the server from obtaining individual gradients, thus effectively resisting GIAs. In this paper, we propose a stealthy label inference attack that bypasses SA and recovers individual clients' private labels. Specifically, we conduct a theoretical analysis of label inference from the aggregated gradients that are exclusively obtained after implementing SA. The analysis reveals that the inputs (embeddings) and outputs (logits) of the final fully connected layer (FCL) contribute to gradient disaggregation and label restoration. To preset the embeddings and logits of the FCL, we craft a fishing model by solely modifying the parameters of a single batch normalization (BN) layer in the original model. By distributing client-specific fishing models, the server can derive the individual gradients with respect to the FCL's bias by solving a linear system with the expected embeddings and the aggregated gradients as coefficients. The labels of each client can then be precisely computed from the preset logits and the gradients of the FCL's bias. Extensive experiments show that our attack achieves large-scale label recovery with 100\% accuracy on various datasets and model architectures.
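The observation underlying the label-restoration step is standard: for softmax cross-entropy, the gradient of the loss with respect to the final layer's bias equals softmax(z) - y, so an individual example's bias gradient reveals its label through its single negative coordinate. A minimal sketch of this fact, independent of the paper's fishing-model construction:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, true_label = 5, 3

# Final fully-connected-layer pre-activations (logits) for one example.
z = rng.normal(size=num_classes)
p = np.exp(z - z.max()); p /= p.sum()     # softmax probabilities
y = np.eye(num_classes)[true_label]       # one-hot label

grad_bias = p - y                          # d(cross-entropy)/d(bias) of the FCL
recovered = int(np.argmin(grad_bias))      # only the true class entry is negative
print(grad_bias.round(3), "-> recovered label:", recovered)
assert recovered == true_label
```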
EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
Abstract
Efficient adaptation of large language models (LLMs) on edge devices is essential for applications requiring continuous and privacy-preserving adaptation and inference. However, existing tuning techniques fall short because of their high computation and memory overheads. To this end, we introduce a computation- and memory-efficient LLM tuning framework, called Edge-LLM, to facilitate affordable and effective LLM adaptation on edge devices. Specifically, Edge-LLM features three core components: (1) a layer-wise unified compression (LUC) technique that reduces the computation overhead by generating layer-wise pruning sparsity and quantization bit-width policies, (2) an adaptive layer tuning and voting scheme that reduces the memory overhead by reducing the backpropagation depth, and (3) a complementary hardware scheduling strategy that handles the irregular computation patterns introduced by LUC and adaptive layer tuning, thereby achieving efficient computation and data movement. Extensive experiments demonstrate that Edge-LLM achieves a 2.92x speedup and a 4x memory overhead reduction compared to vanilla tuning methods, with comparable task accuracy. Our code is available at this https URL.
Data Issues in Industrial AI System: A Meta-Review and Research Strategy
Authors: Xuejiao Li, Cheng Yang, Charles Møller, Jay Lee
Abstract
In the era of Industry 4.0, artificial intelligence (AI) is assuming an increasingly pivotal role within industrial systems. Despite the recent trend within various industries to adopt AI, the actual adoption of AI is not as developed as perceived. A significant factor contributing to this lag is the data issues in AI implementation. How to address these data issues stands as a significant concern confronting both industry and academia. To address data issues, the first step involves mapping out these issues. Therefore, this study conducts a meta-review to explore data issues and methods within the implementation of industrial AI. Seventy-two data issues are identified and categorized into various stages of the data lifecycle, including data source and collection, data access and storage, data integration and interoperation, data pre-processing, data processing, data security and privacy, and AI technology adoption. Subsequently, the study analyzes the data requirements of various AI algorithms. Building on the aforementioned analyses, it proposes a data management framework, addressing how data issues can be systematically resolved at every stage of the data lifecycle. Finally, the study highlights future research directions. In doing so, this study enriches the existing body of knowledge and provides guidelines for professionals navigating the complex landscape of achieving data usability and usefulness in industrial AI.
Privacy Implications of Explainable AI in Data-Driven Systems
Abstract
Machine learning (ML) models, though demonstrably powerful, suffer from a lack of interpretability. This absence of transparency, often referred to as the black-box nature of ML models, undermines trust and underscores the need for efforts to enhance their explainability. Explainable AI (XAI) techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes of these complex models. Techniques such as Counterfactual Explanations (CF) and Feature Importance play a crucial role in achieving this goal. Furthermore, high-quality and diverse data remain the foundation for robust and trustworthy ML applications. In many applications, the data used to train ML models and XAI explainers contain sensitive information. In this context, numerous privacy-preserving techniques, such as differential privacy, can be employed to safeguard sensitive information in the data. A conflict between XAI and privacy solutions then emerges due to their opposing goals. Because XAI techniques provide reasoning for model behavior, they reveal information about ML models, such as their decision boundaries, feature values, or the gradients of deep learning models, when explanations are exposed to a third party. Attackers can use these explanations to mount privacy-breaching attacks such as model extraction, inference, and membership attacks. This dilemma underscores the challenge of finding the right equilibrium between understanding ML decision-making and safeguarding privacy.
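As a toy illustration of one such leakage channel (not an attack taken from the paper), the sketch below mounts a simple membership test on an overfitted linear model: the distance to the nearest counterfactual, i.e., to the decision boundary, tends to be larger for training members, so an attacker who can query explanations may threshold it. All data and model choices are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Few training samples in many dimensions encourage memorization.
d = 50
X_member = rng.normal(size=(16, d))
y_member = (X_member @ rng.normal(size=d) > 0).astype(int)
X_non = rng.normal(size=(200, d))

model = LogisticRegression(C=1000.0, max_iter=5000).fit(X_member, y_member)

def counterfactual_distance(model, X):
    """Distance to the decision boundary: for a linear model, the cost of the
    cheapest counterfactual (smallest input change that flips the prediction)."""
    w = model.coef_.ravel()
    return np.abs(X @ w + model.intercept_) / np.linalg.norm(w)

d_in = counterfactual_distance(model, X_member)
d_out = counterfactual_distance(model, X_non)
# Training members tend to sit farther from the boundary than fresh points,
# so this distance can serve as a membership score.
print(f"median distance: members={np.median(d_in):.2f}, others={np.median(d_out):.2f}")
```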
Rethinking Entity-level Unlearning for Large Language Models
Authors: Weitao Ma, Xiaocheng Feng, Weihong Zhong, Lei Huang, Yangfan Ye, Bing Qin
Subjects:
Computation and Language (cs.CL)
Abstract
Large language model unlearning has gained increasing attention due to its potential to mitigate security and privacy concerns. Current research predominantly focuses on Instance-level unlearning, specifically aiming at forgetting predefined instances of sensitive content. However, a notable gap still exists in exploring the deletion of complete entity-related information, which is crucial in many real-world scenarios, such as copyright protection. To this end, we propose a novel task of Entity-level unlearning, where the entity-related knowledge within the target model is supposed to be entirely erased. Given the challenge of practically accessing all entity-related knowledge within a model, we begin by simulating entity-level unlearning scenarios through fine-tuning models to introduce pseudo entities. Following this, we develop baseline methods inspired by trending unlearning techniques and conduct a detailed comparison of their effectiveness in this task. Extensive experiments reveal that current unlearning algorithms struggle to achieve effective entity-level unlearning. Additionally, our analyses indicate that entity-related knowledge injected through fine-tuning is more susceptible to unlearning than knowledge of entities acquired during pre-training, highlighting the need for pseudo-entity injection methods that more closely approximate pre-trained knowledge.
Abstract
To realize the ubiquitous intelligence of future vehicular networks, artificial intelligence (AI) is critical, since it can mine knowledge from vehicular data to improve the quality of many AI-driven vehicular services. By combining AI techniques with vehicular networks, Vehicular Edge Intelligence (VEI) can utilize the computing, storage, and communication resources of vehicles to train AI models. Nevertheless, when executing model training, the traditional centralized learning paradigm requires vehicles to upload their raw data to a central server, which results in significant communication overhead and a risk of privacy leakage. In this article, we first review the system architectures, performance metrics, and challenges of VEI design. We then propose to utilize a distributed machine learning scheme, namely split federated learning (SFL), to boost the development of VEI. We present a novel adaptive and parallel SFL scheme and analyze its performance. Future research directions are highlighted to shed light on the efficient design of SFL.
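For readers unfamiliar with SFL, the sketch below shows the basic mechanics of a split model with a cut layer: the client computes the front of the network and transmits only the smashed activations, while the server computes the rest and returns the gradient at the cut. This is a generic split-learning illustration, not the article's adaptive and parallel scheme.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Client (vehicle) holds the front of the model; server holds the back.
client_net = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
server_net = nn.Sequential(nn.Linear(16, 2))
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))  # raw data stays local

# Client forward pass up to the cut layer; only activations cross the network.
smashed = client_net(x)
sent = smashed.detach().requires_grad_(True)   # what the server receives

# Server forward/backward pass on its half of the model.
loss = nn.functional.cross_entropy(server_net(sent), y)
opt_s.zero_grad(); loss.backward(); opt_s.step()

# Server returns the gradient at the cut; client finishes backprop locally.
opt_c.zero_grad(); smashed.backward(sent.grad); opt_c.step()
print("round done, loss:", float(loss))
```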
Privacy Requirements and Realities of Digital Public Goods
Abstract
In the international development community, the term "digital public goods" (DPGs) is used to describe open-source digital products (e.g., software, datasets) that aim to address the United Nations (UN) Sustainable Development Goals. DPGs are increasingly being used to deliver government services around the world (e.g., ID management, healthcare registration). Because DPGs may handle sensitive data, the UN has established user privacy as a first-order requirement for DPGs. The privacy risks of DPGs are currently managed in part by the DPG standard, which includes a prerequisite questionnaire with questions designed to evaluate a DPG's privacy posture. This study examines the effectiveness of the current DPG standard in ensuring adequate privacy protections. We present a systematic assessment of responses from DPGs regarding their protections of users' privacy. We also present in-depth case studies of three widely used DPGs to identify privacy threats and compare these to their responses to the DPG standard. Our findings reveal limitations in the current DPG standard's evaluation approach. We conclude by presenting preliminary recommendations and suggestions for strengthening the DPG standard as it relates to privacy. Additionally, we hope this study encourages more usable privacy research on communicating privacy, not only to end users but also to third-party adopters of user-facing technologies.
Credit Attribution and Stable Compression
Authors: Roi Livni, Shay Moran, Kobbi Nissim, Chirag Pabbaraju
Abstract
Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators. We study credit attribution by machine learning algorithms. We propose new definitions--relaxations of Differential Privacy--that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output. Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm). We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.
Privacy Preserving Machine Learning for Electronic Health Records using Federated Learning and Differential Privacy
Abstract
An Electronic Health Record (EHR) is an electronic database used by healthcare providers to store patients' medical records, which may include diagnoses, treatments, costs, and other personal information. Machine learning (ML) algorithms can be used to extract and analyze this patient data to improve patient care. Because patient records contain highly sensitive information, such as social security numbers (SSNs) and residential addresses, privacy-preserving techniques, such as federated learning and differential privacy, need to be applied to these ML models.
Meta-FL: A Novel Meta-Learning Framework for Optimizing Heterogeneous Model Aggregation in Federated Learning
Authors: Zahir Alsulaimawi
Subjects:
Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Abstract
Federated Learning (FL) enables collaborative model training across diverse entities while safeguarding data privacy. However, FL faces challenges such as data heterogeneity and model diversity. The Meta-Federated Learning (Meta-FL) framework has been introduced to tackle these challenges. Meta-FL employs an optimization-based Meta-Aggregator to navigate the complexities of heterogeneous model updates. The Meta-Aggregator enhances the global model's performance by leveraging meta-features, ensuring a tailored aggregation that accounts for each local model's accuracy. Empirical evaluation across four healthcare-related datasets demonstrates the Meta-FL framework's adaptability, efficiency, scalability, and robustness, outperforming conventional FL approaches. Furthermore, Meta-FL's remarkable efficiency and scalability are evident in its achievement of superior accuracy with fewer communication rounds and its capacity to manage expanding federated networks without compromising performance.
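The abstract describes accuracy-aware aggregation over meta-features; a minimal sketch of one plausible form of that idea, with held-out accuracy standing in for the meta-features (an assumption on our part), is:

```python
import numpy as np

def meta_aggregate(client_weights, client_accuracies, temperature=0.1):
    """Aggregate client models with softmax weights over a meta-feature
    (here: held-out accuracy), instead of a plain FedAvg mean."""
    acc = np.asarray(client_accuracies, dtype=float)
    w = np.exp(acc / temperature); w /= w.sum()
    stacked = np.stack(client_weights)          # (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0)

clients = [np.random.default_rng(i).normal(size=4) for i in range(3)]
accs = [0.62, 0.81, 0.74]                       # each client's validation accuracy
print("aggregated params:", meta_aggregate(clients, accs).round(3))
```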
Crepe: A Mobile Screen Data Collector Using Graph Query
Authors: Yuwen Lu, Meng Chen, Qi Zhao, Victor Cox, Yang Yang, Meng Jiang, Jay Brockman, Tamara Kay, Toby Jia-Jun Li
Abstract
Collecting mobile datasets remains challenging for academic researchers due to limited data access and technical barriers. Commercial organizations often possess exclusive access to mobile data, leading to a "data monopoly" that restricts the independence of academic research. Existing open-source mobile data collection frameworks primarily focus on mobile sensing data rather than screen content, which is crucial for various research studies. We present Crepe, a no-code Android app that enables researchers to collect information displayed on screen through simple demonstrations of target data. Crepe utilizes a novel Graph Query technique which augments the structures of mobile UI screens to support flexible identification, location, and collection of specific data pieces. The tool emphasizes participants' privacy and agency by providing full transparency over collected data and allowing easy opt-out. We designed and built Crepe for research purposes only and in scenarios where researchers obtain explicit consent from participants. Code for Crepe will be open-sourced to support future academic research data collection.
Privacy-Preserving and Trustworthy Localization in an IoT Environment
Authors: Guglielmo Zocca, Omar Hasan
Subjects:
Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
The Internet of Things (IoT) is increasingly prevalent in various applications, such as healthcare and logistics. One significant service of IoT technologies that is essential for these applications is localization. The goal of this service is to determine the precise position of a specific target. The localization data often needs to be private, accessible only to specific entities, and must maintain authenticity and integrity to ensure trustworthiness. IoT technology has evolved significantly, with Ultra-Wide Band (UWB) technology enhancing localization speed and precision. However, IoT device security remains a concern, as devices can be compromised or act maliciously. Furthermore, localization data is typically stored centrally, which can also be a point of vulnerability. Our approach leverages the features of a permissioned blockchain, specifically Hyperledger Fabric, to address these challenges. Hyperledger Fabric's collection feature ensures data privacy, and its smart contracts (chaincode) enhance trustworthiness. We tested our solution using a network of devices known as CLOVES, demonstrating robust performance characteristics with UWB technology. Additionally, we evaluated our approach through an indoor localization use case.
On Computing Pairwise Statistics with Local Differential Privacy
Authors: Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Adam Sealfon
Subjects:
Data Structures and Algorithms (cs.DS); Cryptography and Security (cs.CR)
Abstract
We study the problem of computing pairwise statistics, i.e., ones of the form $\binom{n}{2}^{-1} \sum_{i \ne j} f(x_i, x_j)$, where $x_i$ denotes the input to the $i$th user, with differential privacy (DP) in the local model. This formulation captures important metrics such as Kendall's $\tau$ coefficient, Area Under Curve, Gini's mean difference, Gini's entropy, etc. We give several novel and generic algorithms for the problem, leveraging techniques from DP algorithms for linear queries.
Thinking Inside The Box: Privacy Against Stronger Adversaries
Authors: Eldon Chung
Subjects:
Cryptography and Security (cs.CR)
Abstract
In this thesis, we study extensions of statistical cryptographic primitives. In particular, we study leakage-resilient secret sharing, non-malleable extractors, and immunized ideal one-way functions. The thesis is divided into three main chapters. In the first chapter, we show that 2-out-of-2 leakage-resilient (and also non-malleable) secret sharing requires randomness sources that are also extractable. This rules out the possibility of using min-entropic sources. In the second, we introduce collision-resistant seeded extractors and show that any seeded extractor can be made collision-resistant at a small overhead in seed length. We then use this to give a two-source non-malleable extractor with entropy rate 0.81 in one source and polylogarithmic entropy in the other. The non-malleable extractor leads to the first statistical privacy amplification protocol against memory-tampering adversaries. In the final chapter, we study the hardness of the data structure variant of the 3SUM problem, which is motivated by a recent construction to immunize random oracles against pre-processing adversaries. We give worst-case data structure hardness for the 3SUM problem matching known barriers in data structures for adaptive adversaries. We also give a slightly stronger lower bound in the case of non-adaptivity. Lastly, we give a novel result in the bit-probe setting.
Automated Privacy-Preserving Techniques via Meta-Learning
Authors: Tânia Carvalho, Nuno Moniz, Luís Antunes
Subjects:
Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Abstract
Sharing private data for learning tasks is pivotal for transparent and secure machine learning applications. Many privacy-preserving techniques have been proposed for this task, aiming to transform the data while ensuring the privacy of individuals. Some of these techniques have been incorporated into tools, whereas others are accessed through various online platforms. However, such tools require manual configuration, which can be complex and time-consuming. Moreover, they require substantial expertise, potentially restricting their use to those with advanced technical knowledge. In this paper, we propose AUTOPRIV, the first automated privacy-preservation method, which eliminates the need for any manual configuration. AUTOPRIV employs meta-learning to automate the de-identification process, facilitating the secure release of data for machine learning tasks. The main goal is to anticipate the predictive performance and privacy risk of a large set of privacy configurations. We provide a ranked list of the most promising solutions, which are likely to achieve an optimal approximation within a new domain. AUTOPRIV is highly effective, as it considerably reduces computational complexity and energy consumption.
Noisy Neighbors: Efficient membership inference attacks against LLMs
Abstract
The potential of transformer-based LLMs risks being hindered by privacy concerns due to their reliance on extensive datasets, possibly including sensitive information. Regulatory measures like the GDPR and CCPA call for robust auditing tools to address potential privacy issues, with Membership Inference Attacks (MIAs) being the primary method for assessing LLMs' privacy risks. Unlike traditional MIA approaches, which often require computationally intensive training of additional models, this paper introduces an efficient methodology that generates \textit{noisy neighbors} for a target sample by adding stochastic noise in the embedding space, requiring only inference-mode operation of the target model. Our findings demonstrate that this approach closely matches the effectiveness of employing shadow models, showing its usability in practical privacy auditing scenarios.
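A minimal sketch of the mechanics as described, with a toy stand-in for the target model: perturb a sample's token embeddings with Gaussian noise to form noisy neighbors, then compare the model's loss on the original sequence to the losses of its neighbors. The tiny model, vocabulary, and noise scale are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim = 100, 32

# Stand-in for the target LM: an embedding layer plus an output head.
embed = nn.Embedding(vocab, dim)
head = nn.Linear(dim, vocab)

def sequence_loss(token_ids, noise_std=0.0):
    """Next-token loss, optionally with Gaussian noise in embedding space."""
    with torch.no_grad():
        e = embed(token_ids[:-1])
        if noise_std > 0:
            e = e + noise_std * torch.randn_like(e)   # the 'noisy neighbor'
        logits = head(e)
        return nn.functional.cross_entropy(logits, token_ids[1:]).item()

target = torch.randint(0, vocab, (20,))
base = sequence_loss(target)
neighbors = [sequence_loss(target, noise_std=0.5) for _ in range(16)]

# Membership score: how far the sample's loss sits below its noisy neighborhood.
score = sum(neighbors) / len(neighbors) - base
print(f"loss={base:.3f}, neighbor mean={sum(neighbors)/len(neighbors):.3f}, score={score:.3f}")
```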
Personalized federated learning based on feature fusion
Abstract
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy. However, due to the heterogeneity of data, models, and devices, the final global model may not perform well on each client's tasks. Communication bottlenecks, data heterogeneity, and model heterogeneity are common challenges in federated learning. In this work, we consider the label distribution skew problem, a type of data heterogeneity that is easily overlooked. In the context of classification, we propose a personalized federated learning approach called pFedPM. In our approach, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models. These feature representations also play a role in preserving privacy to some extent. We use a hyperparameter $a$ to mix local and global features, which enables us to control the degree of personalization. We also introduce a relation network as an additional decision layer, which provides a non-linear learnable classifier to predict labels. Experimental results show that, with an appropriate setting of $a$, our scheme outperforms several recent FL methods on the MNIST, FEMNIST, and CIFAR10 datasets while requiring fewer communication rounds.
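The feature-mixing step is simple to state: with mixing hyperparameter $a$, each client classifies from $h = a \cdot h_{local} + (1-a) \cdot h_{global}$. A minimal sketch follows, with the relation network reduced to a small learnable classifier, which is our simplification:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
a, dim, num_classes = 0.7, 64, 10     # a controls the degree of personalization

h_local = torch.randn(8, dim)          # features from the client's local encoder
h_global = torch.randn(8, dim)         # aggregated global feature representations

h_mixed = a * h_local + (1 - a) * h_global

# Stand-in for the relation network: a small non-linear learnable classifier.
relation_net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, num_classes))
logits = relation_net(h_mixed)
print("predicted labels:", logits.argmax(dim=1).tolist())
```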
Toward Fairer Face Recognition Datasets
Authors: Alexandre Fournier-Mongieux, Michael Soumm, Adrian Popescu, Bertrand Luvison, Hervé Le Borgne
Subjects:
Computer Vision and Pattern Recognition (cs.CV)
Abstract
Face recognition and verification are two computer vision tasks whose performance has progressed with the introduction of deep representations. However, ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development. Generative AI addresses privacy by creating fictitious identities, but fairness problems persist. We promote fairness by introducing a demographic attribute balancing mechanism in generated training datasets. We experiment with an existing real dataset, three generated training datasets, and balanced versions of a diffusion-based dataset. We propose a comprehensive evaluation that considers accuracy and fairness equally and includes a rigorous regression-based statistical analysis of attributes. The analysis shows that balancing reduces demographic unfairness. Also, a performance gap persists despite generation becoming more accurate with time. The proposed balancing method and comprehensive verification evaluation promote fairer and more transparent face recognition and verification.
Comment on Chen et al.'s Authentication Protocol for Internet of Health Things
Authors: Iman Jafarian, Siavash Khorsandi
Subjects:
Cryptography and Security (cs.CR)
Abstract
The Internet of Medical Things has revolutionized the healthcare industry, enabling the seamless integration of connected medical devices and wearable sensors to enhance patient care and optimize healthcare services. However, the rapid adoption of the Internet of Medical Things also introduces significant security challenges that must be effectively addressed to preserve patient privacy, protect sensitive medical data, and ensure the overall reliability and safety of Internet of Medical Things systems. In this context, a key agreement protocol is used to securely establish shared cryptographic keys between interconnected medical devices and the central system, ensuring confidential and authenticated communication. Recently, Chen et al. proposed a lightweight authentication and key agreement protocol for the Internet of Health Things. In this article, we provide a descriptive analysis of their proposed scheme and prove that Chen et al.'s scheme is vulnerable to known session-specific temporary information attacks and stolen-verifier attacks.
Keyword: machine learning
Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations
Authors: Areg Karapetyan, Aaron Chung Hin Chow, Samer Madanat
Abstract
In light of growing threats posed by climate change in general and sea level rise (SLR) in particular, the necessity for computationally efficient means to estimate and analyze potential coastal flood hazards has become increasingly pressing. Data-driven supervised learning methods serve as promising candidates that can dramatically expedite the process, thereby eliminating the computational bottleneck associated with traditional physics-based hydrodynamic simulators. Yet, the development of accurate and reliable coastal flood prediction models, especially those based on Deep Learning (DL) techniques, has been plagued with two major issues: (1) the scarcity of training data and (2) the high-dimensional output required for detailed inundation mapping. To remove this barrier, we present a systematic framework for training high-fidelity Deep Vision-based coastal flood prediction models in low-data settings. We test the proposed workflow on different existing vision models, including a fully transformer-based architecture and a Convolutional Neural Network (CNN) with additive attention gates. Additionally, we introduce a deep CNN architecture tailored specifically to the coastal flood prediction problem at hand. The model was designed with a particular focus on its compactness so as to cater to resource-constrained scenarios and accessibility aspects. The performance of the developed DL models is validated against commonly adopted geostatistical regression methods and traditional Machine Learning (ML) approaches, demonstrating substantial improvement in prediction quality. Lastly, we round up the contributions by providing a meticulously curated dataset of synthetic flood inundation maps of Abu Dhabi's coast produced with a physics-based hydrodynamic simulator, which can serve as a benchmark for evaluating future coastal flood prediction models.
Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark
Authors: Yili Wang, Yixin Liu, Xu Shen, Chenyu Li, Kaize Ding, Rui Miao, Ying Wang, Shirui Pan, Xin Wang
Abstract
To build safe and reliable graph machine learning systems, unsupervised graph-level anomaly detection (GLAD) and unsupervised graph-level out-of-distribution (OOD) detection (GLOD) have received significant attention in recent years. Though those two lines of research indeed share the same objective, they have been studied independently in the community due to distinct evaluation setups, creating a gap that hinders the application and evaluation of methods from one to the other. To bridge the gap, in this work, we present a Unified Benchmark for unsupervised Graph-level OOD and anomaly Detection (our method), a comprehensive evaluation framework that unifies GLAD and GLOD under the concept of generalized graph-level OOD detection. Our benchmark encompasses 35 datasets spanning four practical anomaly and OOD detection scenarios, facilitating the comparison of 16 representative GLAD/GLOD methods. We conduct multi-dimensional analyses to explore the effectiveness, generalizability, robustness, and efficiency of existing methods, shedding light on their strengths and limitations. Furthermore, we provide an open-source codebase (this https URL) of our method to foster reproducible research and outline potential directions for future investigations based on our insights.
How to Rent GPUs on a Budget
Authors: Zhouzi Li, Benjamin Berg, Arpan Mukhopadhyay, Mor Harchol-Balter
Subjects:
Distributed, Parallel, and Cluster Computing (cs.DC); Performance (cs.PF)
Abstract
The explosion in Machine Learning (ML) over the past ten years has led to a dramatic increase in demand for GPUs to train ML models. Because it is prohibitively expensive for most users to build and maintain a large GPU cluster, large cloud providers (Microsoft Azure, Amazon AWS, Google Cloud) have seen explosive growth in demand for renting cloud-based GPUs. In this cloud-computing paradigm, a user must specify their demand for GPUs at every moment in time, and will pay for every GPU-hour they use. ML training jobs are known to be parallelizable to different degrees. Given a stream of ML training jobs, a user typically wants to minimize the mean response time across all jobs. Here, the response time of a job denotes the time from when a job arrives until it is complete. Additionally, the user is constrained by some operating budget. Specifically, in this paper the user is constrained to use no more than $b$ GPUs per hour, over a long-run time average. The question is how to minimize mean response time while meeting the budget constraint. Because training jobs receive a diminishing marginal benefit from running on additional GPUs, allocating too many GPUs to a single training job can dramatically increase the overall cost paid by the user. Hence, an optimal rental policy must balance a tradeoff between training cost and mean response time. This paper derives the optimal rental policy for a stream of training jobs where the jobs have different levels of parallelizability (specified by a speedup function) and different job sizes (amounts of inherent work). We make almost no assumptions about the arrival process and about the job size distribution. Our optimal policy specifies how many GPUs to rent at every moment in time and how to allocate these GPUs.
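To see the tradeoff concretely: if a job of inherent size $w$ runs on $k$ GPUs with speedup $s(k)$, its running time is $w/s(k)$ while it consumes $k \cdot w/s(k)$ GPU-hours, which grows with $k$ whenever the speedup is sublinear. A short numeric illustration, with the speedup form $s(k) = k^p$ chosen by us rather than taken from the paper:

```python
# Diminishing returns: running time falls with more GPUs, GPU-hours rise.
w, p = 100.0, 0.5                  # job size (hours of work), speedup exponent

def speedup(k):
    return k ** p                  # sublinear speedup function

for k in (1, 4, 16, 64):
    time = w / speedup(k)          # response time if the job runs alone
    cost = k * time                # GPU-hours consumed, counted against the budget
    print(f"k={k:>2}: time={time:7.1f} h, cost={cost:7.1f} GPU-hours")
```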
Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods
Authors: Kathleen C. Fraser, Hillary Dawkins, Svetlana Kiritchenko
Subjects:
Computation and Language (cs.CL); Computers and Society (cs.CY)
Abstract
Large language models (LLMs) have advanced to the point that even humans have difficulty discerning whether a text was generated by another human or by a computer. However, knowing whether a text was produced by a human or by artificial intelligence (AI) is important for determining its trustworthiness, and has applications in many domains, including detecting fraud and academic dishonesty, as well as combating the spread of misinformation and political propaganda. The task of AI-generated text (AIGT) detection is therefore both very challenging and highly critical. In this survey, we summarize state-of-the-art approaches to AIGT detection, including watermarking, statistical and stylistic analysis, and machine learning classification. We also provide information about existing datasets for this task. Synthesizing the research findings, we aim to provide insight into the salient factors that combine to determine how "detectable" AIGT is under different scenarios, and to make practical recommendations for future work on this significant technical and societal challenge.
MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations
Authors: Parikshit Solunke, Vitoria Guardieiro, Joao Rulff, Peter Xenopoulos, Gromit Yeuk-Yin Chan, Brian Barr, Luis Gustavo Nonato, Claudio Silva
Abstract
With the increasing use of black-box Machine Learning (ML) techniques in critical applications, there is a growing demand for methods that can provide transparency and accountability for model predictions. As a result, a large number of local explainability methods for black-box models have been developed and popularized. However, machine learning explanations are still hard to evaluate and compare due to the high dimensionality, heterogeneous representations, varying scales, and stochastic nature of some of these methods. Topological Data Analysis (TDA) can be an effective method in this domain since it can be used to transform attributions into uniform graph representations, providing a common ground for comparison across different explanation methods. We present a novel topology-driven visual analytics tool, Mountaineer, that allows ML practitioners to interactively analyze and compare these representations by linking the topological graphs back to the original data distribution, model predictions, and feature attributions. Mountaineer facilitates rapid and iterative exploration of ML explanations, enabling experts to gain deeper insights into the explanation techniques, understand the underlying data distributions, and thus reach well-founded conclusions about model behavior. Furthermore, we demonstrate the utility of Mountaineer through two case studies using real-world data. In the first, we show how Mountaineer enabled us to compare black-box ML explanations and discern the regions and causes of disagreement between different explanations. In the second, we demonstrate how the tool can be used to compare and understand ML models themselves. Finally, we conducted interviews with three industry experts to help us evaluate our work.
Physics Informed Machine Learning (PIML) methods for estimating the remaining useful lifetime (RUL) of aircraft engines
Abstract
This paper is aimed at using the newly developing field of physics informed machine learning (PIML) to develop models for predicting the remaining useful lifetime (RUL) of aircraft engines. We consider the well-known benchmark NASA Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data as the main data for this paper, which consists of sensor outputs in a variety of different operating modes. C-MAPSS is a well-studied dataset, with much existing work in the literature that addresses RUL prediction with classical and deep learning methods. In the absence of published empirical physical laws governing the C-MAPSS data, our approach first uses stochastic methods to estimate the governing physics models from the noisy time series data. In our approach, we model the various sensor readings as being governed by stochastic differential equations, and we estimate the corresponding transition density mean and variance functions of the underlying processes. We then augment LSTM (long short-term memory) models with the learned mean and variance functions during training and inference. Our PIML-based approach differs from previous methods in that we use the data to first learn the physics. Our results indicate that PIML discovery and solution methods are well suited for this problem and outperform previous data-only deep learning methods for this dataset and task. Moreover, the framework developed herein is flexible and can be adapted to other situations (other sensor modalities or combined multi-physics environments), including cases where the underlying physics is only partially observed or known.
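The "learn the physics first" step can be illustrated with the standard Euler-Maruyama moment estimators: for $dX = \mu(X)\,dt + \sigma(X)\,dW$, binned estimates of $E[\Delta X \mid X]/\Delta t$ and $\mathrm{Var}[\Delta X \mid X]/\Delta t$ recover the drift and diffusion from a noisy series. A minimal sketch on a simulated Ornstein-Uhlenbeck process, our stand-in for a sensor channel rather than actual C-MAPSS data:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 100_000

# Simulate an Ornstein-Uhlenbeck process: dX = -theta*X dt + sigma dW.
theta, sigma = 2.0, 0.5
x = np.empty(n); x[0] = 1.0
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * rng.normal()

# Estimate drift and diffusion by binning the increments on the state.
dx = np.diff(x)
bins = np.linspace(-1, 1, 21)
idx = np.digitize(x[:-1], bins)
for b in (5, 10, 15):
    m = idx == b
    drift = dx[m].mean() / dt              # ~ -theta * x at the bin center
    diff2 = dx[m].var() / dt               # ~ sigma^2
    center = 0.5 * (bins[b - 1] + bins[b])
    print(f"x~{center:+.2f}: drift={drift:+.2f} (true {-theta*center:+.2f}), "
          f"sigma^2={diff2:.3f} (true {sigma**2:.3f})")
```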
Optimization of Trajectories for Machine Learning Training in Robot Accuracy Modeling
Abstract
Recently, machine learning (ML) methods have been developed to increase the accuracy of robot mechanisms. Complex mechanical issues such as non-linear friction, backlash, and flexibility of structural transmission elements cause these errors, and they are hard to model. ML requires training data, and the above mechanical phenomena are highly dependent on the position of the robot in the workspace and also on its velocity, especially near zero velocity in both directions, where non-linearities such as Stribeck and Coulomb friction are most pronounced. It is well known that the success of ML methods depends on the amount of training data, and it is expensive and time-consuming to collect data from physical robot motion. We therefore address the problem of searching for trajectories in the 6D space of positions and velocities which collect the most information in the least amount of time. This reduces to a special case of the traveling salesman problem (TSP), in that the robot must be programmed to visit sampled points in the position-velocity phase space most efficiently. The two goals of this work are (1) to computationally study the difficulty of the TSP in this application by applying it to X, Y, Z motion in 3D space (6D phase space) and (2) to assess the effectiveness of an extremely simple nearest-neighbor search algorithm compared to random sampling of the search space. Results confirm that the nearest-neighbor heuristic produces significantly better trajectories than random sampling in this application.
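A sketch of the nearest-neighbor heuristic over sampled position-velocity points follows; treating the 6D phase space with a plain Euclidean metric is our simplifying assumption, since a real robot would weight position and velocity differently:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(500, 6))   # sampled (x, y, z, vx, vy, vz) states

def nearest_neighbor_tour(points, start=0):
    """Greedy tour: repeatedly visit the closest unvisited phase-space point."""
    unvisited = set(range(len(points)))
    tour, cur = [start], start
    unvisited.remove(start)
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - points[cur]))
        tour.append(nxt); unvisited.remove(nxt); cur = nxt
    return tour

def tour_length(points, tour):
    return sum(np.linalg.norm(points[a] - points[b]) for a, b in zip(tour, tour[1:]))

nn_tour = nearest_neighbor_tour(points)
random_tour = list(rng.permutation(len(points)))
print(f"nearest neighbor: {tour_length(points, nn_tour):.1f}, "
      f"random: {tour_length(points, random_tour):.1f}")
```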
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph
Abstract
Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML). The rapid proliferation of large language models (LLMs) has stimulated researchers to seek efficient and effective approaches to UQ in text generation tasks, as in addition to their emerging capabilities, these models have introduced new challenges for building safe applications. As with other ML models, LLMs are prone to making incorrect predictions, ``hallucinating'' by fabricating claims, or simply generating low-quality output for a given input. UQ is a key element in dealing with these challenges. However, research to date on UQ methods for LLMs has been fragmented, with disparate evaluation methods. In this work, we tackle this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and provides an environment for controllable and consistent evaluation of novel techniques by researchers across various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across nine tasks and shed light on the most promising approaches.
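One of the simplest information-based baselines in this family is mean token entropy over a generation: average the entropy of the model's next-token distributions and treat higher values as higher uncertainty. A self-contained sketch with simulated logits (no real LLM involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_token_entropy(logits):
    """Average per-step entropy of the next-token distributions.
    Higher values indicate a less certain generation."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z); p /= p.sum(axis=-1, keepdims=True)
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return ent.mean()

# Simulated logits for a 12-token generation over a 1000-token vocabulary:
# one sequence with sharply peaked distributions, one near-uniform.
confident = rng.normal(size=(12, 1000)) + 8.0 * np.eye(1000)[rng.integers(0, 1000, 12)]
uncertain = rng.normal(size=(12, 1000))
print(f"confident: {mean_token_entropy(confident):.2f} nats, "
      f"uncertain: {mean_token_entropy(uncertain):.2f} nats")
```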
Inferring Pluggable Types with Machine Learning
Authors: Kazi Amanul Islam Siddiqui, Martin Kellogg
Abstract
Pluggable type systems allow programmers to extend the type system of a programming language to enforce semantic properties defined by the programmer. Pluggable type systems are difficult to deploy in legacy codebases because they require programmers to write type annotations manually. This paper investigates how to use machine learning to infer type qualifiers automatically. We propose a novel representation, NaP-AST, that encodes minimal dataflow hints for the effective inference of type qualifiers. We evaluate several model architectures for inferring type qualifiers, including Graph Transformer Network (GTN), Graph Convolutional Network, and Large Language Model. We further validate these models by applying them to 12 open-source programs from a prior evaluation of the NullAway pluggable typechecker, lowering warnings in all but one unannotated project. We find that the GTN shows the best performance, with a recall of 0.89 and a precision of 0.60. Furthermore, we conduct a study to estimate the number of Java classes needed for good performance of the trained model. In our feasibility study, performance improved at around 16k classes and deteriorated due to overfitting at around 22k classes.
CasModaTest: A Cascaded and Model-agnostic Self-directed Framework for Unit Test Generation
Abstract
Though many machine learning (ML)-based unit test generation approaches have been proposed and have indeed achieved remarkable performance, they still have several limitations in effectiveness and practical usage. More precisely, existing ML-based approaches (1) generate only partial content of a unit test, mainly focusing on test oracle generation; (2) mismatch the test prefix with the test oracle semantically; and (3) are tightly bound to closed-source models, eventually damaging data security. We propose CasModaTest, a cascaded, model-agnostic, and end-to-end unit test generation framework, to alleviate the above limitations with two cascaded stages: test prefix generation and test oracle generation. We manually build large-scale demo pools to provide CasModaTest with high-quality examples of test prefixes and test oracles. Finally, CasModaTest automatically assembles the generated test prefixes and test oracles and compiles or executes them to check their effectiveness, optionally making several attempts to fix errors that occur during the compilation and execution phases. To evaluate the effectiveness of CasModaTest, we conduct large-scale experiments on a widely used dataset (Defects4J) and compare it with four state-of-the-art (SOTA) approaches on two performance measures. The experimental results indicate that CasModaTest outperforms all SOTAs with a substantial improvement (i.e., 60.62%-352.55% in terms of accuracy, 2.83%-87.27% in terms of focal method coverage). Besides, we also conduct experiments with CasModaTest on different open-source LLMs and find that it achieves significant improvements over SOTAs (39.82%-293.96% and 9.25%-98.95% in terms of accuracy and focal method coverage, respectively) in end-to-end unit test generation.
Privacy Implications of Explainable AI in Data-Driven Systems
Abstract
Machine learning (ML) models, though demonstrably powerful, suffer from a lack of interpretability. This absence of transparency, often referred to as the black-box nature of ML models, undermines trust and underscores the need for efforts to enhance their explainability. Explainable AI (XAI) techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes of these complex models. Techniques such as Counterfactual Explanations (CF) and Feature Importance play a crucial role in achieving this goal. Furthermore, high-quality and diverse data remain the foundation for robust and trustworthy ML applications. In many applications, the data used to train ML models and XAI explainers contain sensitive information. In this context, numerous privacy-preserving techniques, such as differential privacy, can be employed to safeguard sensitive information in the data. A conflict between XAI and privacy solutions then emerges due to their opposing goals. Because XAI techniques provide reasoning for model behavior, they reveal information about ML models, such as their decision boundaries, feature values, or the gradients of deep learning models, when explanations are exposed to a third party. Attackers can use these explanations to mount privacy-breaching attacks such as model extraction, inference, and membership attacks. This dilemma underscores the challenge of finding the right equilibrium between understanding ML decision-making and safeguarding privacy.
Abstract
To realize the ubiquitous intelligence of future vehicular networks, artificial intelligence (AI) is critical, since it can mine knowledge from vehicular data to improve the quality of many AI-driven vehicular services. By combining AI techniques with vehicular networks, Vehicular Edge Intelligence (VEI) can utilize the computing, storage, and communication resources of vehicles to train AI models. Nevertheless, when executing model training, the traditional centralized learning paradigm requires vehicles to upload their raw data to a central server, which results in significant communication overhead and a risk of privacy leakage. In this article, we first review the system architectures, performance metrics, and challenges of VEI design. We then propose to utilize a distributed machine learning scheme, namely split federated learning (SFL), to boost the development of VEI. We present a novel adaptive and parallel SFL scheme and analyze its performance. Future research directions are highlighted to shed light on the efficient design of SFL.
Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations
Authors: Lorenzo Basile, Santiago Acevedo, Luca Bortolussi, Fabio Anselmi, Alex Rodriguez
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
Abstract
To gain insight into the mechanisms behind machine learning methods, it is crucial to establish connections among the features describing data points. However, these correlations often exhibit a high-dimensional and strongly nonlinear nature, which makes them challenging to detect using standard methods. This paper exploits the entanglement between intrinsic dimensionality and correlation to propose a metric that quantifies the (potentially nonlinear) correlation between high-dimensional manifolds. We first validate our method on synthetic data in controlled environments, showcasing its advantages and drawbacks compared to existing techniques. Subsequently, we extend our analysis to large-scale applications in neural network representations. Specifically, we focus on latent representations of multimodal data, uncovering clear correlations between paired visual and textual embeddings, whereas existing methods struggle significantly in detecting similarity. Our results indicate the presence of highly nonlinear correlation patterns between latent manifolds.
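As a rough illustration of the idea (our sketch, not the authors' code), the snippet below estimates intrinsic dimension with the TwoNN estimator of Facco et al. and scores the correlation of two point clouds by how much their joint intrinsic dimension falls short of the sum of the marginals; the paper's exact estimator and normalization may differ:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def twonn_dimension(X):
        """TwoNN intrinsic dimension estimator (Facco et al., 2017)."""
        nn = NearestNeighbors(n_neighbors=3).fit(X)
        dists, _ = nn.kneighbors(X)           # column 0 is the point itself
        mu = dists[:, 2] / dists[:, 1]        # ratio of 2nd to 1st neighbor distance
        mu = mu[mu > 1.0]                     # guard against duplicate points
        return len(mu) / np.sum(np.log(mu))   # maximum-likelihood estimate

    def id_correlation(X, Y):
        """Heuristic score: how far the joint dimension drops below dx + dy."""
        dx, dy = twonn_dimension(X), twonn_dimension(Y)
        dxy = twonn_dimension(np.hstack([X, Y]))
        # Independence gives dxy ~ dx + dy; full dependence gives dxy ~ max(dx, dy).
        return (dx + dy - dxy) / min(dx, dy)

    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 2000)
    X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(2000, 2))
    Y = np.c_[t, t ** 2] + 0.01 * rng.normal(size=(2000, 2))  # nonlinear function of t
    print(id_correlation(X, Y))  # near 1: both clouds share the same 1D latent variable

Note that Y is a nonlinear function of the latent parameter of X, so a linear correlation measure would largely miss the dependence the dimension-based score detects.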
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI
Authors: Christopher Burger, Charles Walter, Thai Le
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Abstract
Recent work has investigated the vulnerability of local surrogate methods to adversarial perturbations on a machine learning (ML) model's inputs, where the explanation is manipulated while the meaning and structure of the original input remain similar under the complex model. While weaknesses have been shown to exist across many methods, the reasons why remain little explored. Central to the concept of adversarial attacks on explainable AI (XAI) is the similarity measure used to calculate how one explanation differs from another. A poor choice of similarity measure can result in erroneous conclusions about the efficacy of an XAI method: too sensitive a measure exaggerates vulnerability, while too coarse a measure understates it. We investigate a variety of similarity measures designed for text-based ranked lists, including Kendall's Tau, Spearman's Footrule, and Rank-biased Overlap, to determine how substantial changes in the type of measure or threshold of success affect the conclusions generated by common adversarial attack processes. Certain measures are found to be overly sensitive, resulting in erroneous estimates of stability.
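To make the three measures concrete, here is a small, self-contained comparison of a hypothetical original and perturbed feature ranking (the feature names are invented); `rbo` is a minimal truncated implementation of Rank-biased Overlap rather than the fully extrapolated version:

    import numpy as np
    from scipy.stats import kendalltau

    def spearman_footrule(a, b):
        """Sum of absolute rank displacements between two rankings of the same items."""
        pos_b = {item: i for i, item in enumerate(b)}
        return sum(abs(i - pos_b[item]) for i, item in enumerate(a))

    def rbo(a, b, p=0.9):
        """Rank-biased Overlap (Webber et al., 2010), truncated at full list depth."""
        score = 0.0
        seen_a, seen_b = set(), set()
        for d in range(1, len(a) + 1):
            seen_a.add(a[d - 1])
            seen_b.add(b[d - 1])
            score += (len(seen_a & seen_b) / d) * p ** (d - 1)
        return (1 - p) * score

    original = ["loan_amount", "income", "age", "zip_code", "tenure"]
    perturbed = ["income", "loan_amount", "age", "tenure", "zip_code"]

    tau, _ = kendalltau([original.index(f) for f in original],
                        [perturbed.index(f) for f in original])
    print(f"Kendall's Tau:       {tau:.3f}")
    print(f"Spearman's Footrule: {spearman_footrule(original, perturbed)}")
    print(f"Rank-biased Overlap: {rbo(original, perturbed):.3f}")

The same pair of rankings can look nearly stable under one measure and badly perturbed under another, which is the sensitivity issue the abstract raises.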
Credit Attribution and Stable Compression
Authors: Roi Livni, Shay Moran, Kobbi Nissim, Chirag Pabbaraju
Abstract
Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators. We study credit attribution by machine learning algorithms. We propose new definitions--relaxations of Differential Privacy--that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output. Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm). We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.
Multistep Criticality Search and Power Shaping in Microreactors with Reinforcement Learning
Authors: Majdi I. Radaideh, Leo Tunkle, Dean Price, Kamal Abdulraheem, Linyu Lin, Moutaz Elias
Subjects: Systems and Control (eess.SY); Computational Engineering, Finance, and Science (cs.CE); Machine Learning (cs.LG); Applications (stat.AP)
Abstract
Reducing operation and maintenance costs is a key objective for advanced reactors in general and microreactors in particular. To achieve this reduction, developing robust autonomous control algorithms is essential to ensure safe and autonomous reactor operation. Recently, artificial intelligence and machine learning algorithms, specifically reinforcement learning (RL) algorithms, have seen rapidly increasing application to control problems, such as plasma control in fusion tokamaks and building energy management. In this work, we introduce the use of RL for intelligent control in nuclear microreactors. The RL agent is trained using proximal policy optimization (PPO) and advantage actor-critic (A2C), two cutting-edge deep RL techniques, based on a high-fidelity simulation of a microreactor design inspired by the Westinghouse eVinci\textsuperscript{TM} design. We utilized a Serpent model to generate data on drum positions, core criticality, and core power distribution for training a feedforward neural network surrogate model. This surrogate model was then used to guide the PPO and A2C control policies in determining the optimal drum position across various reactor burnup states, ensuring critical core conditions and symmetrical power distribution across all six core portions. The results demonstrate the excellent performance of PPO in identifying optimal drum positions, achieving a hextant power tilt ratio of approximately 1.002 (within the limit of $<$ 1.02) and maintaining criticality within a 10 pcm range. A2C did not perform as competitively as PPO across the performance metrics for all burnup steps considered in the cycle. Additionally, the results highlight the capability of well-trained RL control policies to quickly identify control actions, suggesting a promising approach for enabling real-time autonomous control through digital twins.
An Automated SQL Query Grading System Using An Attention-Based Convolutional Neural Network
Authors: Donald R. Schwartz, Pablo Rivas
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Databases (cs.DB); Machine Learning (cs.LG)
Abstract
Grading SQL queries can be a time-consuming, tedious, and challenging task, especially as the number of student submissions increases. Several systems have been introduced in an attempt to mitigate these challenges, but those systems have their own limitations. This paper describes our novel approach to automating the process of grading SQL queries. Unlike previous approaches, we employ a unique convolutional neural network architecture that uses parameter sharing across different machine learning tasks, enabling the architecture to induce different knowledge representations of the data and increasing its potential for understanding SQL statements.
Towards Exact Computation of Inductive Bias
Authors: Akhilan Boopathy, William Yue, Jaedong Hwang, Abhiram Iyer, Ila Fiete
Abstract
Much research in machine learning involves finding appropriate inductive biases (e.g. convolutional neural networks, momentum-based optimizers, transformers) to promote generalization on tasks. However, quantification of the amount of inductive bias associated with these architectures and hyperparameters has been limited. We propose a novel method for efficiently computing the inductive bias required for generalization on a task with a fixed training data budget; formally, this corresponds to the amount of information required to specify well-generalizing models within a specific hypothesis space of models. Our approach involves modeling the loss distribution of random hypotheses drawn from a hypothesis space to estimate the required inductive bias for a task relative to these hypotheses. Unlike prior work, our method provides a direct estimate of inductive bias without using bounds and is applicable to diverse hypothesis spaces. Moreover, we derive approximation error bounds for our estimation approach in terms of the number of sampled hypotheses. Consistent with prior results, our empirical results demonstrate that higher dimensional tasks require greater inductive bias. We show that relative to other expressive model classes, neural networks as a model class encode large amounts of inductive bias. Furthermore, our measure quantifies the relative difference in inductive bias between different neural network architectures. Our proposed inductive bias metric provides an information-theoretic interpretation of the benefits of specific model architectures for certain tasks and provides a quantitative guide to developing tasks requiring greater inductive bias, thereby encouraging the development of more powerful inductive biases.
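The following toy sketch conveys our reading of the core quantity, not the paper's estimator: sample random hypotheses from a hypothesis space, estimate the probability that a random draw generalizes to within a loss threshold, and report $-\log_2$ of that probability as the inductive bias, in bits, needed to specify a well-generalizing model (the task, hypothesis space, and threshold below are all invented for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # toy linearly separable task

    def random_hypothesis():
        w = rng.normal(size=2)                         # random linear classifier
        return lambda X: (X @ w > 0).astype(float)

    losses = np.array([np.mean(random_hypothesis()(X) != y) for _ in range(20000)])
    eps = 0.05                                         # "well-generalizing" threshold
    p_good = np.mean(losses <= eps)
    print(f"P(random hypothesis has error <= {eps}): {p_good:.4f}")
    print(f"Estimated inductive bias: {-np.log2(p_good):.2f} bits")

Intuitively, the rarer a good hypothesis is under random sampling, the more bits of prior information an architecture or learning rule must contribute, which matches the abstract's finding that harder, higher-dimensional tasks demand more inductive bias.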
Fair Clustering: Critique, Caveats, and Future Directions
Authors: John Dickerson, Seyed A. Esmaeili, Jamie Morgenstern, Claire Jie Zhang
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Data Structures and Algorithms (cs.DS)
Abstract
Clustering is a fundamental problem in machine learning and operations research. Therefore, given the fact that fairness considerations have become of paramount importance in algorithm design, fairness in clustering has received significant attention from the research community. The literature on fair clustering has resulted in a collection of interesting fairness notions and elaborate algorithms. In this paper, we take a critical view of fair clustering, identifying a collection of ignored issues such as the lack of a clear utility characterization and the difficulty in accounting for the downstream effects of a fair clustering algorithm in machine learning settings. In some cases, we demonstrate examples where the application of a fair clustering algorithm can have significant negative impacts on social welfare. We end by identifying a collection of steps that would lead towards more impactful research in fair clustering.
Privacy Preserving Machine Learning for Electronic Health Records using Federated Learning and Differential Privacy
Abstract
An Electronic Health Record (EHR) is an electronic database used by healthcare providers to store patients' medical records which may include diagnoses, treatments, costs, and other personal information. Machine learning (ML) algorithms can be used to extract and analyze patient data to improve patient care. Patient records contain highly sensitive information, such as social security numbers (SSNs) and residential addresses, which introduces a need to apply privacy-preserving techniques for these ML models using federated learning and differential privacy.
Serial Position Effects of Large Language Models
Authors: Xiaobo Guo, Soroush Vosoughi
Subjects: Computation and Language (cs.CL)
Abstract
Large Language Models (LLMs) have shown remarkable capabilities in zero-shot learning applications, generating responses to queries using only pre-training information without the need for additional fine-tuning. This represents a significant departure from traditional machine learning approaches. Previous research has indicated that LLMs may exhibit serial position effects, such as primacy and recency biases, which are well-documented cognitive biases in human psychology. Our extensive testing across various tasks and models confirms the widespread occurrence of these effects, although their intensity varies. We also discovered that while carefully designed prompts can somewhat mitigate these biases, their effectiveness is inconsistent. These findings underscore the significance of serial position effects during the inference process, particularly in scenarios where there are no ground truth labels, highlighting the need for greater focus on addressing these effects in LLM applications.
Learning with Noisy Ground Truth: From 2D Classification to 3D Reconstruction
Abstract
Deep neural networks have been highly successful in data-intensive computer vision applications, but such success relies heavily on massive amounts of clean data. In real-world scenarios, clean data is sometimes difficult to obtain. For example, in image classification and segmentation tasks, precise annotations for millions of samples are generally very expensive and time-consuming. In the 3D static scene reconstruction task, most NeRF-related methods rest on the foundational assumption of a static scene (e.g., consistent lighting conditions and persistent object positions), which is often violated in real-world scenarios. To address these problems, learning with noisy ground truth (LNGT) has emerged as an effective learning method and shows great potential. In this short survey, we propose a formal definition to unify the analysis of LNGT in the context of different machine learning tasks (classification and regression). Based on this definition, we propose a novel taxonomy that classifies existing work according to error decomposition under the fundamental definition of machine learning. Further, we provide an in-depth analysis of the memorization effect and an insightful discussion of potential future research opportunities, from 2D classification to 3D reconstruction, in the hope of providing guidance for follow-up research.
Towards Biologically Plausible Computing: A Comprehensive Comparison
Abstract
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning, which uses a gradient descent method to update network weights by minimizing the discrepancy between actual and desired outputs. Despite its pivotal role in propelling deep learning advancements, the biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training. To address this long-standing challenge, many studies have endeavored to devise biologically plausible training algorithms. However, a fully biologically plausible algorithm for training multilayer neural networks remains elusive, and interpretations of biological plausibility vary among researchers. In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet. Using these criteria, we evaluate a range of existing algorithms considered to be biologically plausible, including Hebbian learning, spike-timing-dependent plasticity, feedback alignment, target propagation, predictive coding, forward-forward algorithm, perturbation learning, local losses, and energy-based learning. Additionally, we empirically evaluate these algorithms across diverse network architectures and datasets. We compare the feature representations learned by these algorithms with brain activity recorded by non-invasive devices under identical stimuli, aiming to identify which algorithm can most accurately replicate brain activity patterns. We are hopeful that this study could inspire the development of new biologically plausible algorithms for training multilayer networks, thereby fostering progress in both the fields of neuroscience and machine learning.
Review of Zero-Shot and Few-Shot AI Algorithms in The Medical Domain
Authors: Maged Badawi, Mohammedyahia Abushanab, Sheethal Bhat, Andreas Maier
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
In this paper, different techniques of few-shot, zero-shot, and regular object detection have been investigated. The need for few-shot and zero-shot learning techniques arises from the limitations of traditional machine learning, deep learning, and computer vision methods, which require large amounts of data and generalize poorly. These techniques can deliver prominent results from only a few training examples, reducing the required amount of data and improving generalization. This survey highlights papers from the last three years that apply few-shot and zero-shot learning techniques to address these challenges. We reviewed zero-shot, few-shot, and regular object detection methods and categorized them in an understandable manner. Based on the comparison made within each category, the approaches are found to be quite impressive. This integrated review of diverse papers on few-shot, zero-shot, and regular object detection reveals a shared focus on advancing the field through novel frameworks and techniques. A noteworthy observation is the scarcity of detailed discussions regarding the difficulties encountered during the development phase. Contributions include the introduction of innovative models, such as ZSD-YOLO and GTNet, often showcasing improvements on various metrics such as mean average precision (mAP), Recall@100 (RE@100), the area under the receiver operating characteristic curve (AUROC), and precision. These findings underscore a collective move towards leveraging vision-language models for versatile applications, with potential areas for future research including a more thorough exploration of limitations and domain-specific adaptations.
Accelerating Matrix Diagonalization through Decision Transformers with Epsilon-Greedy Optimization
Authors: Kshitij Bhatta, Geigh Zollicoffer, Manish Bhattarai, Phil Romero, Christian F. A. Negre, Anders M. N. Niklasson, Adetokunbo Adedoyin
Abstract
This paper introduces a novel framework for matrix diagonalization, recasting it as a sequential decision-making problem and applying the power of Decision Transformers (DTs). Our approach determines optimal pivot selection during diagonalization with the Jacobi algorithm, leading to significant speedups compared to the traditional max-element Jacobi method. To bolster robustness, we integrate an epsilon-greedy strategy, enabling success in scenarios where deterministic approaches fail. This work demonstrates the effectiveness of DTs in complex computational tasks and highlights the potential of reimagining mathematical operations through a machine learning lens. Furthermore, we establish the generalizability of our method by using transfer learning to diagonalize matrices of smaller sizes than those used in training.
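For context, the classical max-element Jacobi baseline the paper accelerates looks like the sketch below; the DT-based method replaces the argmax pivot rule with a learned policy (with epsilon-greedy exploration), which is not shown here:

    import numpy as np

    def jacobi_diagonalize(A, tol=1e-10, max_sweeps=100):
        """Max-element Jacobi eigenvalue iteration for a symmetric matrix."""
        A = A.copy().astype(float)
        n = A.shape[0]
        for _ in range(max_sweeps * n * n):
            # Pivot: the largest off-diagonal element in magnitude.
            off = np.abs(A - np.diag(np.diag(A)))
            p, q = np.unravel_index(np.argmax(off), off.shape)
            if off[p, q] < tol:
                break
            # Rotation angle that zeroes A[p, q]: tan(2*theta) = 2*A_pq / (A_qq - A_pp).
            theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A = J.T @ A @ J
        return np.sort(np.diag(A))

    S = np.random.default_rng(0).normal(size=(6, 6))
    S = (S + S.T) / 2                        # symmetric test matrix
    print(np.allclose(jacobi_diagonalize(S), np.sort(np.linalg.eigvalsh(S))))

Each rotation costs the same regardless of pivot, so the total cost is driven by how many rotations the pivot rule needs, which is exactly the quantity a learned pivot policy can reduce.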
Blind Baselines Beat Membership Inference Attacks for Foundation Models
Authors: Debeshee Das, Jie Zhang, Florian Tramèr
Subjects: Cryptography and Security (cs.CR); Computation and Language (cs.CL); Machine Learning (cs.LG)
Abstract
Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model. For foundation models trained on unknown Web data, MI attacks can be used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning. Unfortunately, we find that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions. For 8 published MI evaluation datasets, we show that blind attacks -- that distinguish the member and non-member distributions without looking at any trained model -- outperform state-of-the-art MI attacks. Existing evaluations thus tell us nothing about membership leakage of a foundation model's training data.
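A blind attack in this sense needs nothing but the candidate texts. The toy sketch below, with fabricated and deliberately separable pools, shows how a plain bag-of-words classifier can "detect membership" purely from distribution shift between the member and non-member pools, never querying any target model; this is exactly the evaluation flaw the paper points out:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Toy member/non-member pools drawn from visibly different distributions.
    members = [f"archived wire report {i}, filed spring 2019" for i in range(200)]
    non_members = [f"community blog entry {i}, posted autumn 2024" for i in range(200)]
    texts = members + non_members
    labels = [1] * len(members) + [0] * len(non_members)

    # The "attack" only ever sees the texts themselves, no model access.
    blind_attack = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    print(cross_val_score(blind_attack, texts, labels, cv=5).mean())  # ~1.0

If such a blind classifier already achieves high accuracy on an MI benchmark, then any model-based attack's score on that benchmark says nothing about actual training-data leakage.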
Learning Run-time Safety Monitors for Machine Learning Components
Authors: Ozan Vardal, Richard Hawkins, Colin Paterson, Chiara Picardi, Daniel Omeiza, Lars Kunze, Ibrahim Habli
Abstract
For machine learning components used as part of autonomous systems (AS) carrying out critical tasks, it is crucial that assurance of the models can be maintained in the face of post-deployment changes (such as changes in the operating environment of the system). A critical part of this is the ability to monitor when the performance of the model at runtime (as a result of changes) poses a safety risk to the system. This is a particularly difficult challenge when ground truth is unavailable at runtime. In this paper we introduce a process for creating safety monitors for ML components through the use of degraded datasets and machine learning. The safety monitor that is created is deployed to the AS in parallel to the ML component to provide a prediction of the safety risk associated with the model output. We demonstrate the viability of our approach through initial experiments using publicly available speed sign datasets.
Evaluating Serverless Machine Learning Performance on Google Cloud Run
Authors: Prerana Khatiwada, Pranjal Dhakal
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Operating Systems (cs.OS)
Abstract
End-users can get functions-as-a-service from serverless platforms, which promise lower hosting costs, high availability, fault tolerance, and dynamic flexibility for hosting individual functions known as microservices. Machine learning tools have proven reliably useful, and the services created using these tools are in increasing demand on a large scale. Serverless platforms are well suited for hosting these machine learning services for large-scale applications, being known for their cost efficiency, fault tolerance, resource scaling, robust APIs for communication, and global reach. However, these platforms were originally designed to host web services, and machine learning workloads differ from web services. Our study aims to understand how serverless platforms handle machine learning workloads. We examine machine learning performance on one such platform, Google Cloud Run, a GPU-less infrastructure that was not designed for deploying machine learning applications.
Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning
Authors: Somnath Basu Roy Chowdhury, Krzysztof Choromanski, Arijit Sehanobish, Avinava Dubey, Snigdha Chaturvedi
Abstract
Machine unlearning is the process of efficiently removing the influence of a training data instance from a trained machine learning model without retraining it from scratch. A popular subclass of unlearning approaches is exact machine unlearning, which focuses on techniques that explicitly guarantee the removal of the influence of a data instance from a model. Exact unlearning approaches use a machine learning model in which individual components are trained on disjoint subsets of the data. During deletion, exact unlearning approaches only retrain the affected components rather than the entire model. While existing approaches reduce retraining costs, it can still be expensive for an organization to retrain a model component, as it requires halting a system in production, which leads to service failure and adversely impacts customers. To address these challenges, we introduce an exact unlearning framework -- Sequence-aware Sharded Sliced Training (S3T) -- designed to enhance the deletion capabilities of an exact unlearning system while minimizing the impact on the model's performance. At the core of S3T, we utilize a lightweight parameter-efficient fine-tuning approach that enables parameter isolation by sequentially training layers with disjoint data slices. This enables efficient unlearning by simply deactivating the layers affected by data deletion. Furthermore, to reduce the retraining cost and improve model performance, we train the model on multiple data sequences, which allows S3T to handle an increased number of deletion requests. Both theoretically and empirically, we demonstrate that S3T attains superior deletion capabilities and enhanced performance compared to baselines across a wide range of settings.
User Story Tutor (UST) to Support Agile Software Developers
Authors: Giseldo da Silva Neo, José Antão Beltrão Moura, Hyggo Oliveira de Almeida, Alana Viana Borges da Silva Neo, Olival de Gusmão Freitas Júnior
Abstract
User Stories record what must be built in projects that use agile practices. User Stories serve both to estimate effort, generally measured in Story Points, and to plan what should be done in a Sprint. Therefore, it is essential to train software engineers on how to create simple, easily readable, and comprehensive User Stories. For that reason, we designed, implemented, applied, and evaluated a web application called User Story Tutor (UST). UST checks the description of a given User Story for readability and, if needed, recommends appropriate practices for improvement. UST also estimates a User Story's effort in Story Points using Machine Learning techniques. As such, UST may support the continuing education of agile development teams when writing and reviewing User Stories. UST's ease of use was evaluated by 40 agile practitioners according to the Technology Acceptance Model (TAM) and AttrakDiff. The TAM evaluation averages were good in almost all considered variables. Application of the AttrakDiff evaluation framework produced similarly good results. Apparently, UST can be used with good reliability. Applying UST to assist in the construction of User Stories is a viable technique that, at the very least, can be used by agile teams to complement and enhance current User Story creation.
Learning-Based Heavy Hitters and Flow Frequency Estimation in Streams
Authors: Rana Shahout, Michael Mitzenmacher
Subjects: Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG)
Abstract
Identifying heavy hitters and estimating the frequencies of flows are fundamental tasks in various network domains. Existing approaches to this challenge can broadly be categorized into two groups, hashing-based and competing-counter-based. The Count-Min sketch is a standard example of a hashing-based algorithm, and the Space Saving algorithm is an example of a competing-counter algorithm. Recent works have explored the use of machine learning to enhance algorithms for frequency estimation problems, under the algorithms with prediction framework. However, these works have focused solely on the hashing-based approach, which may not be best for identifying heavy hitters. In this paper, we present the first learned competing-counter-based algorithm, called LSS, for identifying heavy hitters, top k, and flow frequency estimation that utilizes the well-known Space Saving algorithm. We provide theoretical insights into how and to what extent our approach can improve upon Space Saving, backed by experimental results on both synthetic and real-world datasets. Our evaluation demonstrates that LSS can enhance the accuracy and efficiency of Space Saving in identifying heavy hitters, top k, and estimating flow frequencies.
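For reference, the classical Space Saving algorithm that LSS builds on is simple enough to state in full: maintain k counters, and when an unseen item arrives with all counters occupied, evict the minimum counter and let the newcomer inherit its count, so estimates never underestimate the true frequency. The learned component (an oracle predicting which flows are heavy) is not shown here:

    import random

    class SpaceSaving:
        """Classical Space Saving with k counters (Metwally et al., 2005)."""
        def __init__(self, k):
            self.k = k
            self.counters = {}                     # item -> estimated frequency

        def update(self, item):
            if item in self.counters:
                self.counters[item] += 1
            elif len(self.counters) < self.k:
                self.counters[item] = 1
            else:
                victim = min(self.counters, key=self.counters.get)
                count = self.counters.pop(victim)  # evict the minimum counter
                self.counters[item] = count + 1    # newcomer inherits its count

    ss = SpaceSaving(k=4)
    for _ in range(10000):
        ss.update(random.choice("aaaaabbbccde"))   # skewed toy stream
    print(sorted(ss.counters.items(), key=lambda kv: -kv[1]))  # heavy hitters first

The weakness a prediction oracle can address is visible in the eviction rule: a burst of rare items can repeatedly displace counters, inflating error, whereas a predictor of heavy flows can protect the counters that matter.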
AnnotatedTables: A Large Tabular Dataset with Language Model Annotations
Authors: Yaojie Hu, Ilias Fountalis, Jin Tian, Nikolaos Vasiloglou
Abstract
Tabular data is ubiquitous in real-world applications and abundant on the web, yet its annotation has traditionally required human labor, posing a significant scalability bottleneck for tabular machine learning. Our methodology can successfully annotate a large amount of tabular data and can be flexibly steered to generate various types of annotations based on specific research objectives, as we demonstrate with SQL annotation and input-target column annotation as examples. As a result, we release AnnotatedTables, a collection of 32,119 databases with LLM-generated annotations. The dataset includes 405,616 valid SQL programs, making it the largest SQL dataset with associated tabular data that supports query execution. To further demonstrate the value of our methodology and dataset, we perform two follow-up research studies. 1) We investigate whether LLMs can translate SQL programs to Rel programs, a database language previously unknown to LLMs, while obtaining the same execution results. Using our Incremental Prompt Engineering methods based on execution feedback, we show that LLMs can produce adequate translations with few-shot learning. 2) We evaluate the performance of TabPFN, a recent neural tabular classifier trained on Bayesian priors, on 2,720 tables with input-target columns identified and annotated by LLMs. On average, TabPFN performs on par with the baseline AutoML method, though the relative performance can vary significantly from one data table to another, making both models viable for practical applications depending on the situation. Our findings underscore the potential of LLMs in automating the annotation of large volumes of diverse tabular data.
Machine Learning with Real-time and Small Footprint Anomaly Detection System for In-Vehicle Gateway
Authors: Yi Wang, Yuanjin Zheng, Yajun Ha
Subjects: Cryptography and Security (cs.CR)
Abstract
Anomaly Detection System (ADS) is an essential part of a modern gateway Electronic Control Unit (ECU) to detect abnormal behaviors and attacks in vehicles. Among existing attacks, the one-time attack is the most challenging to detect, compounded by the strict gateway ECU constraints of a microsecond- or even nanosecond-level real-time budget and a limited code footprint. To address these challenges, we first propose to use self-information theory to generate values for training and testing models, aiming to achieve real-time detection performance for the one-time attack, which has not been well studied in the past. Second, the generation of self-information is based on a logarithm calculation, which leads to the smallest footprint and reduces the cost in the gateway. Finally, our proposed method uses an unsupervised model without the need for training data for anomalies or attacks. We have compared different machine learning methods ranging from typical machine learning models to deep learning models, e.g., Hidden Markov Model (HMM), Support Vector Data Description (SVDD), and Long Short-Term Memory (LSTM). Experimental results show that our proposed method achieves an 8.7 times lower False Positive Rate (FPR), 1.77 times faster testing time, and 4.88 times smaller footprint.
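As a sketch of the self-information feature (our illustration; the abstract does not spell out the exact construction, and the message IDs below are hypothetical), each message is scored by $-\log_2 p(\mathrm{id})$ under frequencies estimated from benign traffic, so rare or never-seen IDs, such as those injected by a one-time attack, receive large scores at the cost of one logarithm per message:

    import math
    from collections import Counter

    benign = ["0x100", "0x200", "0x100", "0x300"] * 2500   # hypothetical benign CAN IDs
    freq = Counter(benign)
    total = sum(freq.values())

    def self_information(msg_id, alpha=1.0):
        # Laplace smoothing keeps unseen IDs at a large but finite score.
        p = (freq.get(msg_id, 0) + alpha) / (total + alpha * (len(freq) + 1))
        return -math.log2(p)

    for msg in ["0x100", "0x300", "0xDEAD"]:               # the last ID never occurred
        print(msg, round(self_information(msg), 2))        # unseen ID scores highest

A downstream unsupervised model (or even a plain threshold) can then flag high-scoring messages, which is consistent with the footprint and latency numbers the abstract reports.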
Automated Privacy-Preserving Techniques via Meta-Learning
Authors: Tânia Carvalho, Nuno Moniz, Luís Antunes
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Abstract
Sharing private data for learning tasks is pivotal for transparent and secure machine learning applications. Many privacy-preserving techniques have been proposed for this task, aiming to transform the data while ensuring the privacy of individuals. Some of these techniques have been incorporated into tools, whereas others are accessed through various online platforms. However, such tools require manual configuration, which can be complex and time-consuming. Moreover, they require substantial expertise, potentially restricting their use to those with advanced technical knowledge. In this paper, we propose AUTOPRIV, the first automated privacy-preservation method, which eliminates the need for any manual configuration. AUTOPRIV employs meta-learning to automate the de-identification process, facilitating the secure release of data for machine learning tasks. The main goal is to anticipate the predictive performance and privacy risk of a large set of privacy configurations. We provide a ranked list of the most promising solutions, which are likely to achieve an optimal approximation within a new domain. AUTOPRIV is highly effective, reducing computational complexity and energy consumption considerably.
Deepfake tweets automatic detection
Authors: Adam Frej, Adrian Kaminski, Piotr Marciniak, Szymon Szmajdzinski, Soveatin Kuntur, Anna Wroblewska
Subjects: Computation and Language (cs.CL)
Abstract
This study addresses the critical challenge of detecting DeepFake tweets by leveraging advanced natural language processing (NLP) techniques to distinguish between genuine and AI-generated texts. Given the increasing prevalence of misinformation, our research utilizes the TweepFake dataset to train and evaluate various machine learning models. The objective is to identify effective strategies for recognizing DeepFake content, thereby enhancing the integrity of digital communications. By developing reliable methods for detecting AI-generated misinformation, this work contributes to a more trustworthy online information environment.
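A minimal baseline of the kind such a study would include (a sketch under assumptions: the stand-in tweets below are fabricated, not TweepFake data, and the study's actual models may differ) is character n-gram TF-IDF features with a linear classifier:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    topics = ["parade", "storm", "derby"] * 50
    human = [f"cant believe the {w} today!! absolute scenes lol #{i}"
             for i, w in enumerate(topics)]
    bot = [f"As an observer, I found today's {w} to be quite remarkable. #{i}"
           for i, w in enumerate(topics)]
    X = human + bot
    y = [0] * len(human) + [1] * len(bot)    # 0 = human, 1 = AI-generated

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # robust to typos/slang
        LinearSVC())
    print(cross_val_score(model, X, y, cv=5).mean())

Character n-grams are a common choice for tweets because hashtags, elongations, and misspellings defeat clean word tokenization.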
SyROCCo: Enhancing Systematic Reviews using Machine Learning
Authors: Zheng Fang, Miguel Arana-Catania, Felix-Anselm van Lier, Juliana Outes Velarde, Harry Bregazzi, Mara Airoldi, Eleanor Carter, Rob Procter
Subjects: Computation and Language (cs.CL); Computers and Society (cs.CY); Digital Libraries (cs.DL); Machine Learning (cs.LG)
Abstract
The sheer number of research outputs published every year makes systematic reviewing increasingly time- and resource-intensive. This paper explores the use of machine learning techniques to help navigate the systematic review process. ML has previously been used to reliably 'screen' articles for review - that is, identify relevant articles based on reviewers' inclusion criteria. The application of ML techniques to subsequent stages of a review, however, such as data extraction and evidence mapping, is in its infancy. We therefore set out to develop a series of tools that would assist in the profiling and analysis of 1,952 publications on the theme of 'outcomes-based contracting'. Tools were developed for the following tasks: assign publications into 'policy area' categories; identify and extract key information for evidence mapping, such as organisations, laws, and geographical information; connect the evidence base to an existing dataset on the same topic; and identify subgroups of articles that may share thematic content. An interactive tool using these techniques and a public dataset with their outputs have been released. Our results demonstrate the utility of ML techniques to enhance evidence accessibility and analysis within the systematic review processes. These efforts show promise in potentially yielding substantial efficiencies for future systematic reviewing and for broadening their analytical scope. Our work suggests that there may be implications for the ease with which policymakers and practitioners can access evidence. While ML techniques seem poised to play a significant role in bridging the gap between research and policy by offering innovative ways of gathering, accessing, and analysing data from systematic reviews, we also highlight their current limitations and the need to exercise caution in their application, particularly given the potential for errors and biases.
GNNTAL: A Novel Model for Identifying Critical Nodes in Complex Networks
Authors: Hao Wang, Ting Luo, Shuang-ping Yang, Ming Jing, Jian Wang, Na Zhao
Subjects: Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph)
Abstract
Identification of critical nodes is a prominent topic in the study of complex networks. Numerous methods have been proposed, yet most exhibit inherent limitations. Traditional approaches primarily analyze specific structural features of the network; however, node influence is typically the result of a combination of multiple factors. Machine learning-based methods struggle to effectively represent the complex characteristics of network structures through suitable embedding techniques and require substantial data for training, rendering them prohibitively costly for large-scale networks. To address these challenges, this paper presents an active learning model based on GraphSAGE and Transformer, named GNNTAL. This model is initially pre-trained on random or synthetic networks and subsequently fine-tuned on real-world networks by selecting a few representative nodes using K-Means clustering and uncertainty sampling. This approach offers two main advantages: (1) it significantly reduces training costs; (2) it simultaneously incorporates both local and global features. A series of comparative experiments conducted on twelve real-world networks demonstrate that GNNTAL achieves superior performance. Additionally, this paper proposes an influence maximization method based on the predictions of the GNNTAL model, which achieves optimal performance without the need for complex computations. Finally, the paper analyses certain limitations of the GNNTAL model and suggests potential solutions.
Cherry on the Cake: Fairness is NOT an Optimization Problem
Authors: Marco Favier, Toon Calders
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY); Computer Science and Game Theory (cs.GT)
Abstract
Fair cake-cutting is a mathematical subfield that studies the problem of fairly dividing a resource among a number of participants. The so-called ``cake,'' as an object, represents any resource that can be distributed among players. This concept is connected to supervised multi-label classification: any dataset can be thought of as a cake that needs to be distributed, where each label is a player that receives its share of the dataset. In particular, any efficient cake-cutting solution for the dataset is equivalent to an optimal decision function. Although we are not the first to demonstrate this connection, the important ramifications of this parallel seem to have been partially forgotten. We revisit these classical results and demonstrate how this connection can be prolifically used for fairness in machine learning problems. Understanding the set of achievable fair decisions is a fundamental step in finding optimal fair solutions and satisfying fairness requirements. By employing the tools of cake-cutting theory, we have been able to describe the behavior of optimal fair decisions, which, counterintuitively, often exhibit quite unfair properties. Specifically, in order to satisfy fairness constraints, it is sometimes preferable, in the name of optimality, to purposefully make mistakes and deny giving the positive label to deserving individuals in a community in favor of less worthy individuals within the same community. This practice is known in the literature as cherry-picking and has been described as ``blatantly unfair.''
Hacking a surrogate model approach to XAI
Abstract
In recent years, the number of new applications for highly complex AI systems has risen significantly. Algorithmic decision-making systems (ADMs) are one such application, where an AI system replaces the decision-making process of a human expert. As one approach to ensure fairness and transparency of such systems, explainable AI (XAI) has become more important. One way to achieve explainability is through surrogate models, i.e., the idea of training a new, simpler machine learning model on the input-output relationship of a black box model. The simpler machine learning model could, for example, be a decision tree, which is thought to be intuitively understandable by humans. However, there is not much insight into how well the surrogate model approximates the black box. Our main assumption is that a good surrogate model approach should be able to bring discriminating behavior of the black box to the attention of humans; prior to our research, we assumed that a surrogate decision tree would identify such a pattern on one of its first levels. However, in this article we show that even if the discriminated subgroup - while otherwise being the same in all categories - does not get a single positive decision from the black box ADM system, the corresponding question of group membership can be pushed down to a level of the tree as deep as the operator of the system wants. We then generalize this finding to pinpoint the exact level of the tree on which the discriminating question is asked, and we show that in a more realistic scenario, where discrimination occurs only in some fraction of the disadvantaged group, it is even easier to hide such discrimination. Our approach generalizes easily to other surrogate models.
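To make the setup concrete, here is a minimal surrogate-tree experiment in the spirit of the abstract (the black box, feature names, and data are invented for illustration): fit a shallow decision tree to a black box's decisions and inspect where, if anywhere, the group-membership split surfaces:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 4))
    group = (rng.random(5000) < 0.1).astype(float)   # hypothetical protected attribute
    X[:, 3] = group

    # A discriminating black box: positives require f0 > 0 AND not being in the group.
    black_box = lambda X: ((X[:, 0] > 0) & (X[:, 3] == 0)).astype(int)

    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box(X))
    print(export_text(surrogate, feature_names=["f0", "f1", "f2", "group"]))

Depending on the data mix, the `group` split need not appear near the root at all; the abstract's point is that its depth can be manipulated, so a shallow inspection of the surrogate can miss discrimination entirely.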
Cubic regularized subspace Newton for non-convex optimization
Authors: Jim Zhao, Aurelien Lucchi, Nikita Doikov
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA); Optimization and Control (math.OC)
Abstract
This paper addresses the optimization problem of minimizing non-convex continuous functions, which is relevant in the context of high-dimensional machine learning applications characterized by over-parametrization. We analyze a randomized coordinate second-order method named SSCN, which can be interpreted as applying cubic regularization in random subspaces. This approach effectively reduces the computational complexity associated with utilizing second-order information, rendering it applicable in higher-dimensional scenarios. Theoretically, we establish convergence guarantees for non-convex functions, with interpolating rates for arbitrary subspace sizes and allowing inexact curvature estimation. When increasing the subspace size, our complexity matches the $\mathcal{O}(\epsilon^{-3/2})$ rate of cubic regularization (CR). Additionally, we propose an adaptive sampling scheme ensuring an exact convergence rate of $\mathcal{O}(\epsilon^{-3/2}, \epsilon^{-3})$ to a second-order stationary point, even without sampling all coordinates. Experimental results demonstrate substantial speed-ups achieved by SSCN compared to conventional first-order methods.
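For reference, the per-iteration subproblem of such a subspace method has the standard cubic-regularization form (our rendering of the generic model; the paper's exact notation may differ). With a sketch matrix $S \in \mathbb{R}^{n \times m}$ whose columns select $m$ random coordinates,
$$s^{\ast} = \arg\min_{s \in \mathbb{R}^{m}} \; \big(S^{\top} \nabla f(x)\big)^{\top} s + \frac{1}{2}\, s^{\top} \big(S^{\top} \nabla^{2} f(x)\, S\big)\, s + \frac{M}{6}\, \|s\|^{3}, \qquad x^{+} = x + S s^{\ast},$$
where $M$ estimates the Lipschitz constant of the Hessian. Only the $m \times m$ Hessian block $S^{\top} \nabla^{2} f(x)\, S$ is ever formed, which is the source of the per-iteration savings over full cubic regularization.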
Addressing Polarization and Unfairness in Performative Prediction
Authors: Kun Jin, Tian Xie, Yang Liu, Xueru Zhang
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
Abstract
When machine learning (ML) models are used in applications that involve humans (e.g., online recommendation, school admission, hiring, lending), the model itself may trigger changes in the distribution of targeted data it aims to predict. Performative prediction (PP) is a framework that explicitly considers such model-dependent distribution shifts when learning ML models. While significant efforts have been devoted to finding performative stable (PS) solutions in PP for system robustness, their societal implications are less explored and it is unclear whether PS solutions are aligned with social norms such as fairness. In this paper, we set out to examine the fairness property of PS solutions in performative prediction. We first show that PS solutions can incur severe polarization effects and group-wise loss disparity. Although existing fairness mechanisms commonly used in literature can help mitigate unfairness, they may fail and disrupt the stability under model-dependent distribution shifts. We thus propose novel fairness intervention mechanisms that can simultaneously achieve both stability and fairness in PP settings. Both theoretical analysis and experiments are provided to validate the proposed method.
General Binding Affinity Guidance for Diffusion Models in Structure-Based Drug Design
Authors: Yue Jian, Curtis Wu, Danny Reidenbach, Aditi S. Krishnapriyan
Abstract
Structure-Based Drug Design (SBDD) focuses on generating valid ligands that strongly and specifically bind to a designated protein pocket. Several methods use machine learning for SBDD to generate these ligands in 3D space, conditioned on the structure of a desired protein pocket. Recently, diffusion models have shown success here by modeling the underlying distributions of atomic positions and types. While these methods are effective in considering the structural details of the protein pocket, they often fail to explicitly consider the binding affinity. Binding affinity characterizes how tightly the ligand binds to the protein pocket, and is measured by the change in free energy associated with the binding process. It is one of the most crucial metrics for benchmarking the effectiveness of the interaction between a ligand and protein pocket. To address this, we propose BADGER: Binding Affinity Diffusion Guidance with Enhanced Refinement. BADGER is a general guidance method to steer the diffusion sampling process towards improved protein-ligand binding, allowing us to adjust the distribution of the binding affinity between ligands and proteins. Our method is enabled by using a neural network (NN) to model the energy function, which is commonly approximated by AutoDock Vina (ADV). ADV's energy function is non-differentiable, and estimates the affinity based on the interactions between a ligand and target protein receptor. By using a NN as a differentiable energy function proxy, we utilize the gradient of our learned energy function as a guidance method on top of any trained diffusion model. We show that our method improves the binding affinity of generated ligands to their protein receptors by up to 60\%, significantly surpassing previous machine learning methods. We also show that our guidance method is flexible and can be easily applied to other diffusion-based SBDD frameworks.
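The guidance mechanism, as described, composes with any trained sampler. Below is a minimal sketch of an energy-gradient-guided denoising step (`denoise_step` and `energy_net` are stand-ins, not the paper's code, and the dummy definitions exist only so the sketch runs end to end):

    import torch

    def guided_step(x, t, denoise_step, energy_net, guidance_scale=0.1):
        """One denoising step followed by a gradient nudge toward lower energy."""
        x = denoise_step(x, t)                       # ordinary diffusion update
        x = x.detach().requires_grad_(True)
        energy = energy_net(x, t).sum()              # learned, differentiable affinity proxy
        grad = torch.autograd.grad(energy, x)[0]
        return (x - guidance_scale * grad).detach()  # steer toward stronger binding

    # Dummy stand-ins for a trained sampler and a trained energy network.
    denoise = lambda x, t: 0.95 * x
    energy = lambda x, t: (x ** 2).sum(dim=-1)
    x = torch.randn(4, 3)
    for t in reversed(range(10)):
        x = guided_step(x, t, denoise, energy)
    print(round(x.norm().item(), 4))  # guided samples drift toward the energy minimum

The design point the abstract emphasizes is that a neural proxy makes the otherwise non-differentiable Vina-style energy usable as a gradient signal, without retraining the diffusion model itself.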
From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
Authors: Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, Zaid Harchaoui
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Abstract
One of the most striking findings in modern research on large language models (LLMs) is that scaling up compute during training leads to better results. However, less attention has been given to the benefits of scaling compute during inference. This survey focuses on these inference-time approaches. We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation. Token-level generation algorithms, often called decoding algorithms, operate by sampling a single token at a time or constructing a token-level search space and then selecting an output. These methods typically assume access to a language model's logits, next-token distributions, or probability scores. Meta-generation algorithms work on partial or full sequences, incorporating domain knowledge, enabling backtracking, and integrating external information. Efficient generation methods aim to reduce token costs and improve the speed of generation. Our survey unifies perspectives from three research communities: traditional natural language processing, modern LLMs, and machine learning systems.
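As a concrete instance of the token-level family the survey covers, here is a self-contained temperature-plus-top-k sampling step over a raw logits vector (the names are generic; any model exposing next-token logits would slot in):

    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=50,
                          rng=np.random.default_rng()):
        """Temperature scaling, top-k filtering, then sampling from the softmax."""
        logits = np.asarray(logits, dtype=float) / temperature
        if top_k < logits.size:
            cutoff = np.sort(logits)[-top_k]
            logits = np.where(logits >= cutoff, logits, -np.inf)  # drop the tail
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return rng.choice(logits.size, p=probs)

    vocab_logits = np.random.default_rng(0).normal(size=1000)  # stand-in model output
    print(sample_next_token(vocab_logits))

Meta-generation algorithms in the survey's taxonomy would then call such a step many times, over partial or full sequences, with search, backtracking, or external verification layered on top.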
Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection
Authors: Saachi Jain, Kimia Hamidieh, Kristian Georgiev, Andrew Ilyas, Marzyeh Ghassemi, Aleksander Madry
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY); Machine Learning (stat.ML)
Abstract
Machine learning models can fail on subgroups that are underrepresented during training. While techniques such as dataset balancing can improve performance on underperforming groups, they require access to training group annotations and can end up removing large portions of the dataset. In this paper, we introduce Data Debiasing with Datamodels (D3M), a debiasing approach which isolates and removes specific training examples that drive the model's failures on minority groups. Our approach enables us to efficiently train debiased classifiers while removing only a small number of examples, and does not require training group annotations or additional hyperparameter tuning.
Keyword: differential privacy
Privacy Implications of Explainable AI in Data-Driven Systems
Credit Attribution and Stable Compression
Privacy Preserving Machine Learning for Electronic Health Records using Federated Learning and Differential Privacy
On Computing Pairwise Statistics with Local Differential Privacy
Keyword: privacy
Artificial Intelligence, VR, AR and Metaverse Technologies for Human Resources Management
An Exploratory Study on Human-Centric Video Anomaly Detection through Variational Autoencoders and Trajectory Prediction
Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
Blockchain for Academic Integrity: Developing the Blockchain Academic Credential Interoperability Protocol (BACIP)
DataFreeShield: Defending Adversarial Attacks without Training Data
ProBE: Proportioning Privacy Budget for Complex Exploratory Decision Support
Breaking Secure Aggregation: Label Leakage from Aggregated Gradients in Federated Learning
EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
Data Issues in Industrial AI System: A Meta-Review and Research Strategy
Privacy Implications of Explainable AI in Data-Driven Systems
Rethinking Entity-level Unlearning for Large Language Models
Split Federated Learning Empowered Vehicular Edge Intelligence: Adaptive Parallel Design and Future Directions
Privacy Requirements and Realities of Digital Public Goods
Credit Attribution and Stable Compression
Privacy Preserving Machine Learning for Electronic Health Records using Federated Learning and Differential Privacy
Meta-FL: A Novel Meta-Learning Framework for Optimizing Heterogeneous Model Aggregation in Federated Learning
Crepe: A Mobile Screen Data Collector Using Graph Query
Privacy-Preserving and Trustworthy Localization in an IoT Environment
On Computing Pairwise Statistics with Local Differential Privacy
Thinking Inside The Box: Privacy Against Stronger Adversaries
Automated Privacy-Preserving Techniques via Meta-Learning
Noisy Neighbors: Efficient membership inference attacks against LLMs
Personalized federated learning based on feature fusion
Toward Fairer Face Recognition Datasets
Comment on Chen et al.'s Authentication Protocol for Internet of Health Things
Keyword: machine learning
Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations
Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark
How to Rent GPUs on a Budget
Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods
MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations
Physics Informed Machine Learning (PIML) methods for estimating the remaining useful lifetime (RUL) of aircraft engines
Optimization of Trajectories for Machine Learning Training in Robot Accuracy Modeling
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph
Inferring Pluggable Types with Machine Learning
CasModaTest: A Cascaded and Model-agnostic Self-directed Framework for Unit Test Generation
Privacy Implications of Explainable AI in Data-Driven Systems
Split Federated Learning Empowered Vehicular Edge Intelligence: Adaptive Parallel Design and Future Directions
Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI
Credit Attribution and Stable Compression
Multistep Criticality Search and Power Shaping in Microreactors with Reinforcement Learning
An Automated SQL Query Grading System Using An Attention-Based Convolutional Neural Network
Towards Exact Computation of Inductive Bias
Fair Clustering: Critique, Caveats, and Future Directions
Privacy Preserving Machine Learning for Electronic Health Records using Federated Learning and Differential Privacy
Serial Position Effects of Large Language Models
Learning with Noisy Ground Truth: From 2D Classification to 3D Reconstruction
Towards Biologically Plausible Computing: A Comprehensive Comparison
Review of Zero-Shot and Few-Shot AI Algorithms in The Medical Domain
Accelerating Matrix Diagonalization through Decision Transformers with Epsilon-Greedy Optimization
Blind Baselines Beat Membership Inference Attacks for Foundation Models
Learning Run-time Safety Monitors for Machine Learning Components
Evaluating Serverless Machine Learning Performance on Google Cloud Run
Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning
User Story Tutor (UST) to Support Agile Software Developers
Learning-Based Heavy Hitters and Flow Frequency Estimation in Streams
AnnotatedTables: A Large Tabular Dataset with Language Model Annotations
Machine Learning with Real-time and Small Footprint Anomaly Detection System for In-Vehicle Gateway
Automated Privacy-Preserving Techniques via Meta-Learning
Deepfake tweets automatic detection
SyROCCo: Enhancing Systematic Reviews using Machine Learning
GNNTAL: A Novel Model for Identifying Critical Nodes in Complex Networks
Cherry on the Cake: Fairness is NOT an Optimization Problem
Hacking a surrogate model approach to XAI
Cubic regularized subspace Newton for non-convex optimization
Addressing Polarization and Unfairness in Performative Prediction
General Binding Affinity Guidance for Diffusion Models in Structure-Based Drug Design
From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection