Keyword: privacy
Quantifying Social Presence in Mixed Reality: A Contemporary Review of Techniques and Innovations
Abstract
This literature review investigates the transformative potential of mixed reality (MR) technology, where we explore the intersection of contemporary technological advancements, modern deep learning recommendation systems, and social psychology frameworks. This interdisciplinary study informs the understanding of MR's role in improving social presence, catalyzing novel social interactions, and enhancing the quality of interpersonal communication in the real world. We also discuss the challenges and barriers blocking the widespread adoption of social networking in MR, such as device constraints, privacy and accessibility concerns, and social norms. Through carefully structured, closed-environment experiments with diverse participants of varying levels of digital literacy, we measure the differences in social dynamics, frequency, quality, and duration of interactions, and levels of social anxiety between MR-enhanced, mobile-enhanced, and control condition participants.
ControlTraj: Controllable Trajectory Generation with Topology-Constrained Diffusion Model
Authors: Yuanshao Zhu, James Jianqiao Yu, Xiangyu Zhao, Qidong Liu, Yongchao Ye, Wei Chen, Zijian Zhang, Xuetao Wei, Yuxuan Liang
Abstract
Generating trajectory data is among the promising solutions for addressing the privacy concerns, collection costs, and proprietary restrictions usually associated with human mobility analyses. However, existing trajectory generation methods are still in their infancy, grappling with issues such as fidelity, flexibility, and generalizability owing to the inherent diversity and unpredictability of human activities. To overcome these obstacles, we propose ControlTraj, a Controllable Trajectory generation framework with a topology-constrained diffusion model. Distinct from prior approaches, ControlTraj utilizes a diffusion model to generate high-fidelity trajectories while integrating the structural constraints of road network topology to guide the geographical outcomes. Specifically, we develop a novel road segment autoencoder to extract fine-grained road segment embeddings. The encoded features, along with trip attributes, are subsequently merged into the proposed geographic denoising UNet architecture, named GeoUNet, to synthesize geographic trajectories from white noise. Through experimentation across three real-world data settings, ControlTraj demonstrates its ability to produce human-directed, high-fidelity trajectories that adapt to unexplored geographical contexts.
Advances and Open Challenges in Federated Learning with Foundation Models
Authors: Chao Ren, Han Yu, Hongyi Peng, Xiaoli Tang, Anran Li, Yulan Gao, Alysa Ziying Tan, Bo Zhao, Xiaoxiao Li, Zengxiang Li, Qiang Yang
Abstract
The integration of Foundation Models (FMs) with Federated Learning (FL) presents a transformative paradigm in Artificial Intelligence (AI), offering enhanced capabilities while addressing concerns of privacy, data decentralization, and computational efficiency. This paper provides a comprehensive survey of the emerging field of Federated Foundation Models (FedFM), elucidating their synergistic relationship and exploring novel methodologies, challenges, and future directions that the FL research field needs to focus on in order to thrive in the age of foundation models. A systematic multi-tiered taxonomy is proposed, categorizing existing FedFM approaches for model training, aggregation, trustworthiness, and incentivization. Key challenges, including how to enable FL to deal with the high complexity of computational demands, privacy considerations, contribution evaluation, and communication efficiency, are thoroughly discussed. Moreover, the paper explores the intricate challenges of communication, scalability, and security inherent in training/fine-tuning FMs via FL, highlighting the potential of quantum computing to revolutionize the training, inference, optimization, and data encryption processes. This survey underscores the importance of further research to propel innovation in FedFM, emphasizing the need for developing trustworthy solutions. It serves as a foundational guide for researchers and practitioners interested in contributing to this interdisciplinary and rapidly advancing field.
Introduction to Eye Tracking: A Hands-On Tutorial for Students and Practitioners
Authors: Enkelejda Kasneci, Hong Gao, Suleyman Ozdel, Virmarie Maquiling, Enkeleda Thaqi, Carrie Lau, Yao Rong, Gjergji Kasneci, Efe Bozkir
Abstract
Eye-tracking technology is widely used in various application areas such as psychology, neuroscience, marketing, and human-computer interaction, as it is a valuable tool for understanding how people process information and interact with their environment. This tutorial provides a comprehensive introduction to eye tracking, from the basics of eye anatomy and physiology to the principles and applications of different eye-tracking systems. The guide is designed to provide a hands-on learning experience for everyone interested in working with eye-tracking technology. Therefore, we include practical case studies to teach students and professionals how to effectively set up and operate an eye-tracking system. The tutorial covers a variety of eye-tracking systems, calibration techniques, data collection, and analysis methods, including fixations, saccades, pupil diameter, and visual scan path analysis. In addition, we emphasize the importance of considering ethical aspects when conducting eye-tracking research and experiments, especially informed consent and participant privacy. We aim to give the reader a solid understanding of basic eye-tracking principles and the practical skills needed to conduct their own experiments. Python-based code snippets and illustrative examples are included in the tutorial and can be downloaded at: https://gitlab.lrz.de/hctl/Eye-Tracking-Tutorial.
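For illustration, here is a minimal dispersion-threshold (I-DT) fixation detector of the kind such tutorials cover; the thresholds, input format, and function name are our own assumptions, not code from the linked repository.

```python
# Minimal dispersion-threshold (I-DT) fixation detection sketch.
# Illustrative only; thresholds and input format are assumptions.
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Return (start_time, end_time, cx, cy) for each detected fixation.

    x, y: gaze coordinates (e.g., degrees of visual angle)
    t:    timestamps in seconds
    """
    fixations, i, n = [], 0, len(t)
    while i < n:
        # Grow a window until it spans at least min_duration.
        j = i
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        w = slice(i, j + 1)
        dispersion = (x[w].max() - x[w].min()) + (y[w].max() - y[w].min())
        if dispersion <= max_dispersion:
            # Extend the window while dispersion stays below threshold.
            while j + 1 < n:
                w = slice(i, j + 2)
                if (x[w].max() - x[w].min()) + (y[w].max() - y[w].min()) > max_dispersion:
                    break
                j += 1
            fixations.append((t[i], t[j], x[i:j + 1].mean(), y[i:j + 1].mean()))
            i = j + 1
        else:
            i += 1
    return fixations
```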
SMI-5: Five Dimensions of Social Media Interaction for Platform (De)Centralization
Authors: Lynnette Hui Xian Ng, Samantha C. Phillips, Kathleen M. Carley
Abstract
Web 3.0 focuses on the decentralization of the internet and creating a system of interconnected and independent computers for improved privacy and security. We extend the idea of decentralization of the web to the social media space and ask: in the context of social media, what does "decentralization" mean? Does decentralization of social media affect user interactions? We put forth the notion that decentralization in social media does not take place solely at the physical network level but can be compartmentalized across the entire social media stack. This paper puts forth SMI-5: the five dimensions of social media interaction for describing the (de)centralization of social platforms. We then illustrate through a case study that user interactions differ based on the slice of the SMI stack analyzed, highlighting the importance of understanding the (de)centralization of social media platforms from a more encompassing perspective rather than the physical network alone.
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Authors: Liang Qu, Cunze Wang, Yuhui Shi
Subjects: Machine Learning (cs.LG); Image and Video Processing (eess.IV)
Abstract
The application of deep learning techniques to medical problems has garnered widespread research interest in recent years, such as applying convolutional neural networks to medical image classification tasks. However, data in the medical field is often highly private, preventing different hospitals from sharing data to train an accurate model. Federated learning, as a privacy-preserving machine learning architecture, has shown promising performance in balancing data privacy and model utility by keeping private data on the client's side and using a central server to coordinate a set of clients for model training through aggregating their uploaded model parameters. Yet, this architecture heavily relies on a trusted third-party server, which is challenging to achieve in real life. Swarm learning, as a specialized decentralized federated learning architecture that does not require a central server, utilizes blockchain technology to enable direct parameter exchanges between clients. However, the mining of blocks requires significant computational resources, limiting its scalability. To address this issue, this paper integrates the brain storm optimization (BSO) algorithm into the swarm learning framework, yielding an approach named BSO-SL. This approach clusters similar clients into different groups based on their model distributions. Additionally, leveraging the architecture of BSO, clients are given a probability of engaging in collaborative learning both within their cluster and with clients outside it, preventing the model from converging to local optima. The proposed method has been validated on a real-world diabetic retinopathy image classification dataset, and the experimental results demonstrate the effectiveness of the proposed approach.
Federated Learning with Only Positive Labels by Exploring Label Correlations
Authors: Xuming An, Dui Wang, Li Shen, Yong Luo, Han Hu, Bo Du, Yonggang Wen, Dacheng Tao
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Abstract
Federated learning aims to collaboratively learn a model using data from multiple users under privacy constraints. In this paper, we study the multi-label classification problem under the federated learning setting, where a trivial solution and extremely poor performance may be obtained, especially when only positive data w.r.t. a single class label are provided for each client. This issue can be addressed by adding a specially designed regularizer on the server side. Although sometimes effective, this simply ignores the label correlations, and thus sub-optimal performance may be obtained. Besides, it is expensive and unsafe to frequently exchange users' private embeddings between the server and clients, especially when training the model in a contrastive way. To remedy these drawbacks, we propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC). Specifically, FedALC estimates the label correlations in the class embedding learning for different label pairs and utilizes them to improve the model training. To further improve safety and also reduce the communication overhead, we propose a variant that learns a fixed class embedding for each client, so that the server and clients only need to exchange class embeddings once. Extensive experiments on multiple popular datasets demonstrate that our FedALC can significantly outperform existing counterparts.
Non-Fungible Programs: Private Full-Stack Applications for Web3
Abstract
The greatest advantage that Web3 applications offer over Web 2.0 is the evolution of the data access layer. Opaque, centralized services that compelled trust from users are replaced by trustless, decentralized systems of smart contracts. However, the public nature of blockchain-based databases, on which smart contracts transact, has typically presented a challenge for applications that depend on data privacy or that rely on participants having incomplete information. This has changed with the introduction of confidential smart contract networks that encrypt the memory state of active contracts as well as their databases stored on-chain. With confidentiality, contracts can more readily implement novel interaction mechanisms that were previously infeasible. Meanwhile, in both Web 2.0 and Web3 applications the user interface continues to play a crucial role in translating user intent into actionable requests. In many cases, developers have shifted intelligence and autonomy onto the client side, leveraging Web technologies for compute, graphics, and networking. Web3's reliance on such frontends has revealed a pain point though, namely that decentralized applications are not accessible to end users without a persistent host serving the application. Here we introduce the Non-Fungible Program (NFP) model for developing self-contained frontend applications that are distributed via blockchain, powered by Web technology, and backed by private databases persisted in encrypted smart contracts. Access to frontend code, as well as backend services, is controlled and guaranteed by smart contracts according to the NFT ownership model, eliminating the need for a separate host. By extension, NFP applications bring interactivity to token owners and enable new functionalities, such as authorization mechanisms for oracles, supplementary Web services, and overlay networks in a secure manner. In addition...
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
Abstract
The concept of differential privacy (DP) quantitatively measures privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset. DP, generally used as a constraint, has been prominent in safeguarding datasets in machine learning at industry giants like Apple and Google. A common methodology for guaranteeing DP is incorporating appropriate noise into query outputs, thereby establishing statistical defenses against privacy attacks such as membership inference and linkage attacks. However, especially for small datasets, existing DP mechanisms occasionally add an excessive amount of noise to query outputs, thereby sacrificing data utility. This is because traditional DP computes privacy loss based on the worst-case scenario, i.e., statistical outliers. In this work, to tackle this challenge, we use per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing the noise tailored to individual instances. In a nutshell, we propose a per-instance noise variance optimization (NVO) game, framed as a common-interest sequential game, and show that its Nash equilibrium (NE) points inherently guarantee pDP for all data instances. Through extensive experiments, our proposed pDP algorithm demonstrates an average performance improvement of up to 99.53% over the conventional DP algorithm in terms of KL divergence.
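To make the motivation concrete, the sketch below computes the per-instance privacy loss of a Gaussian-noised mean query, the quantity pDP tracks; it applies the standard Gaussian-mechanism calibration per instance and is not the authors' NVO game.

```python
# Illustrative per-instance DP (pDP) loss for a mean query under the Gaussian
# mechanism. This sketches the quantity tailored noise would optimize; it is
# not the paper's NVO game, and the (eps, delta) bound is the standard
# Gaussian-mechanism calibration applied per instance.
import numpy as np

def per_instance_sensitivity(data):
    """|f(D) - f(D without x_i)| for the mean query f, for every instance i."""
    n, full_mean = len(data), data.mean()
    return np.array([abs(full_mean - np.delete(data, i).mean()) for i in range(n)])

def per_instance_epsilon(data, sigma, delta=1e-5):
    """Per-instance privacy loss when N(0, sigma^2) noise is added to the mean."""
    return per_instance_sensitivity(data) * np.sqrt(2 * np.log(1.25 / delta)) / sigma

data = np.array([0.1, 0.2, 0.3, 0.2, 5.0])  # the outlier dominates worst-case DP
print(per_instance_epsilon(data, sigma=0.5))
# Inliers incur far smaller loss than the outlier, motivating
# instance-tailored noise instead of worst-case calibration.
```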
APACHE: A Processing-Near-Memory Architecture for Multi-Scheme Fully Homomorphic Encryption
Authors: Lin Ding, Song Bian, Penggao He, Yan Xu, Gang Qu, Jiliang Zhang
Abstract
Fully Homomorphic Encryption (FHE) allows one to outsource computation over encrypted data to untrusted servers without worrying about data breaches. Since FHE is known to be extremely computationally intensive, application-specific accelerators have emerged as a powerful solution to narrow the performance gap. Nonetheless, due to the increasing complexity of FHE schemes per se and of multi-scheme FHE algorithm designs in end-to-end privacy-preserving tasks, existing FHE accelerators often face the challenges of low hardware utilization rates and insufficient memory bandwidth. In this work, we present APACHE, a layered near-memory computing hierarchy tailored for multi-scheme FHE acceleration. By closely inspecting the data flow across different FHE schemes, we propose a layered near-memory computing architecture with fine-grained functional unit design to significantly enhance the utilization rates of both computational resources and memory bandwidth. In addition, we propose a multi-scheme operator compiler to efficiently schedule high-level FHE computations across lower-level functional units. In our experiments, we evaluate APACHE on various FHE applications, such as Lola MNIST, HELR, fully-packed bootstrapping, and fully homomorphic processors. The results show that APACHE outperforms state-of-the-art ASIC FHE accelerators by 2.4x to 19.8x over a variety of operator and application benchmarks.
SynthEval: A Framework for Detailed Utility and Privacy Evaluation of Tabular Synthetic Data
Authors: Anton Danholt Lautrup, Tobias Hyrup, Arthur Zimek, Peter Schneider-Kamp
Abstract
With the growing demand for synthetic data to address contemporary issues in machine learning, such as data scarcity, data fairness, and data privacy, having robust tools for assessing the utility and potential privacy risks of such data becomes crucial. SynthEval, a novel open-source evaluation framework, distinguishes itself from existing tools by treating categorical and numerical attributes with equal care, without assuming any special kind of preprocessing steps. This makes it applicable to virtually any synthetic dataset of tabular records. Our tool leverages statistical and machine learning techniques to comprehensively evaluate synthetic data fidelity and privacy-preserving integrity. SynthEval integrates a wide selection of metrics that can be used independently or in highly customisable benchmark configurations, and can easily be extended with additional metrics. In this paper, we describe SynthEval and illustrate its versatility with examples. The framework facilitates better benchmarking and more consistent comparisons of model capabilities.
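For a flavor of the metric families such frameworks report, the sketch below computes a per-attribute fidelity score and a nearest-neighbour privacy indicator; the function names and metric choices are illustrative and do not reflect SynthEval's actual API.

```python
# Generic sketch of two metric families synthetic-data evaluators report:
# per-attribute fidelity and a nearest-neighbour privacy check.
# Names are illustrative, not SynthEval's API.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.neighbors import NearestNeighbors

def numerical_fidelity(real_col, synth_col):
    """Kolmogorov-Smirnov distance between a real and a synthetic column."""
    return ks_2samp(real_col, synth_col).statistic

def dcr_privacy(real, synth):
    """Median distance-to-closest-record: low values flag potential leakage."""
    nn = NearestNeighbors(n_neighbors=1).fit(real)
    distances, _ = nn.kneighbors(synth)
    return np.median(distances)
```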
Secure and Privacy-Preserving Authentication for Data Subject Rights Enforcement
Abstract
In light of the GDPR, data controllers (DC) need to allow data subjects (DS) to exercise certain data subject rights. A key requirement here is that DCs can reliably authenticate a DS. Due to a lack of clear technical specifications, this has been realized in different ways, such as by requesting copies of ID documents or by email address verification. However, previous research has shown that this is associated with various security and privacy risks and that identifying DSs can be a non-trivial task. In this paper, we review different authentication schemes and propose an architecture that enables DCs to authenticate DSs with the help of independent Identity Providers in a secure and privacy-preserving manner by utilizing attribute-based credentials and eIDs. Our work contributes to a more standardized and privacy-preserving way of authenticating DSs, which will benefit both DCs and DSs.
Privacy-Preserving Billing for Local Energy Markets (Long Version)
Abstract
We propose a privacy-preserving billing protocol for local energy markets (PBP-LEMs) that takes into account market participants' energy volume deviations from their bids. PBP-LEMs enables a group of market entities to jointly compute participants' bills in a decentralized and privacy-preserving manner without sacrificing correctness. It also mitigates risks to individuals' privacy arising from any potential internal collusion. We first propose a novel, efficient, and privacy-preserving individual billing scheme, achieving information-theoretic security, which serves as a building block. PBP-LEMs utilizes this scheme, along with other techniques such as multiparty computation, Pedersen commitments, and inner product functional encryption, to ensure data confidentiality and accuracy. Additionally, we present three approaches, resulting in different levels of privacy and performance. We prove that the protocol meets its security and privacy requirements and is feasible for deployment in real LEMs. Our analysis also shows variations in overall performance and identifies areas where overhead is concentrated based on the applied approach.
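As background for the commitment-based building block, here is a toy Pedersen commitment with its additive homomorphism, which is what lets committed bill components be combined without revealing individual readings; the parameters are tiny and insecure, and this is not the paper's protocol.

```python
# Toy Pedersen commitment over a small prime-order subgroup. Parameters are
# deliberately tiny and insecure; real deployments use standard large groups.
import secrets

p, q = 467, 233            # q divides p - 1; toy parameters only
g, h = 4, 9                # order-q subgroup generators (assumed independent)

def commit(m, r=None):
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m, p) * pow(h, r, p)) % p, r

def verify(c, m, r):
    return c == (pow(g, m, p) * pow(h, r, p)) % p

# Homomorphism: commitments to bill components multiply to a commitment
# of their sum, so totals can be checked without opening each part.
c1, r1 = commit(120)       # e.g., an energy volume reading
c2, r2 = commit(35)
assert verify((c1 * c2) % p, 155, (r1 + r2) % q)
```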
An Element-Wise Weights Aggregation Method for Federated Learning
Abstract
Federated learning (FL) is a powerful Machine Learning (ML) paradigm that enables distributed clients to collaboratively learn a shared global model while keeping the data on the original device, thereby preserving privacy. A central challenge in FL is the effective aggregation of local model weights from disparate and potentially unbalanced participating clients. Existing methods often treat all clients uniformly, applying a single mixing proportion to the entire local model. However, it is empirically advantageous for each weight to be assigned a specific proportion. This paper introduces an innovative Element-Wise Weights Aggregation Method for Federated Learning (EWWA-FL) aimed at optimizing learning performance and accelerating convergence speed. Unlike traditional FL approaches, EWWA-FL aggregates local weights into the global model at the level of individual elements, thereby allowing each participating client to make element-wise contributions to the learning process. By taking into account the unique dataset characteristics of each client, EWWA-FL enhances the robustness of the global model to different datasets while also achieving rapid convergence. The method is flexible enough to employ various weighting strategies. Through comprehensive experiments, we demonstrate the advanced capabilities of EWWA-FL, showing significant improvements in both accuracy and convergence speed across a range of backbones and benchmarks.
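A minimal sketch of the element-wise idea follows, assuming PyTorch state dicts and an inverse-deviation weighting strategy of our own choosing; the paper's exact weighting may differ.

```python
# Sketch of element-wise aggregation in the spirit of EWWA-FL: every parameter
# entry gets its own mixing weight instead of one scalar per client. The
# inverse-deviation weighting is one illustrative strategy, not the paper's.
import torch

def elementwise_aggregate(client_states, eps=1e-8):
    """client_states: list of state_dicts with identical keys and shapes."""
    global_state = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])  # (C, ...)
        # Per-element weights: clients whose value deviates less from the
        # cross-client mean get a larger say for that specific element.
        deviation = (stacked - stacked.mean(dim=0, keepdim=True)).abs()
        weights = 1.0 / (deviation + eps)
        weights = weights / weights.sum(dim=0, keepdim=True)
        global_state[key] = (weights * stacked).sum(dim=0)
    return global_state
```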
BeSound: Bluetooth-Based Position Estimation Enhancing with Cross-Modality Distillation
Authors: Hymalai Bello, Sungho Suh, Bo Zhou, Paul Lukowicz
Subjects: Machine Learning (cs.LG); Signal Processing (eess.SP)
Abstract
Smart factories leverage advanced technologies to optimize manufacturing processes and enhance efficiency. Implementing worker tracking systems, primarily through camera-based methods, ensures accurate monitoring. However, concerns about worker privacy and technology protection make it necessary to explore alternative approaches. We propose a non-visual, scalable solution using Bluetooth Low Energy (BLE) and ultrasound coordinates. BLE position estimation offers a very low-power and cost-effective solution, as the technology is available on smartphones and is scalable due to the large number of smartphone users, facilitating worker localization and safety protocol transmission. Ultrasound signals provide faster response times and higher accuracy but require custom hardware, increasing costs. To combine the benefits of both modalities, we employ knowledge distillation (KD) from ultrasound signals to BLE RSSI data. Once trained, the student model takes only BLE RSSI data as input for inference, retaining the advantages of the ubiquity and low cost of BLE RSSI. We tested our approach using data from an experiment with twelve participants in a smart factory test bed environment and obtained an increase of 11.79% in the F1-score compared to the baseline (the target model without KD, trained with BLE RSSI data only).
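A hedged sketch of the distillation step, using standard Hinton-style soft targets; the architectures, temperature, and loss mix are placeholders rather than the paper's exact setup.

```python
# Cross-modality distillation sketch: a teacher trained on ultrasound
# coordinates supervises a BLE-RSSI student via soft targets.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: match the teacher's tempered class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary supervised loss on the position labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Training step (illustrative): the teacher sees ultrasound features, the
# student only BLE RSSI; at inference time only the student path is kept.
# loss = kd_loss(student(ble_rssi), teacher(ultrasound).detach(), labels)
```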
Keyword: machine learning
An Optimized Framework for Processing Large-scale Polysomnographic Data Incorporating Expert Human Oversight
Authors: Benedikt Holm, Gabriel Jouan, Emil Hardarson, Sigríður Sigurðardottir, Kenan Hoelke, Conor Murphy, Erna Sif Arnardóttir, María Óskarsdóttir, Anna Sigríður Islind
Abstract
Polysomnographic recordings are essential for diagnosing many sleep disorders, yet their detailed analysis presents considerable challenges. With the rise of machine learning methodologies, researchers have created various algorithms to automatically score and extract clinically relevant features from polysomnography, but less research has been devoted to how exactly the algorithms should be incorporated into the workflow of sleep technologists. This paper presents a sophisticated data collection platform developed under the Sleep Revolution project, to harness polysomnographic data from multiple European centers. A tripartite platform is presented: a user-friendly web platform for uploading three-night polysomnographic recordings, a dedicated splitter that segments these into individual one-night recordings, and an advanced processor that enhances the one-night polysomnography with contemporary automatic scoring algorithms. The platform is evaluated using real-life data and human scorers, whereby scoring time, accuracy and trust are quantified. Additionally, the scorers were interviewed about their trust in the platform, along with the impact of its integration into their workflow.
Using Large Language Models to Enrich the Documentation of Datasets for Machine Learning
Authors: Joan Giner-Miguelez, Abel Gómez, Jordi Cabot
Subjects: Digital Libraries (cs.DL); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Abstract
Recent regulatory initiatives like the European AI Act and relevant voices in the Machine Learning (ML) community stress the need to describe datasets along several key dimensions for trustworthy AI, such as the provenance processes and social concerns. However, this information is typically presented as unstructured text in accompanying documentation, hampering its automated analysis and processing. In this work, we explore using large language models (LLM) and a set of prompting strategies to automatically extract these dimensions from documents and enrich the dataset description with them. Our approach could aid data publishers and practitioners in creating machine-readable documentation to improve the discoverability of their datasets, assess their compliance with current AI regulations, and improve the overall quality of ML models trained on them. In this paper, we evaluate the approach on 12 scientific dataset papers published in two scientific journals (Nature's Scientific Data and Elsevier's Data in Brief) using two different LLMs (GPT3.5 and Flan-UL2). Results show good accuracy with our prompt extraction strategies. Concrete results vary depending on the dimensions, but overall, GPT3.5 shows slightly better accuracy (81.21%) than FLAN-UL2 (69.13%), although it is more prone to hallucinations. We have released an open-source tool implementing our approach and a replication package, including the experiments' code and results, in an open-source repository.
Feature Distribution Shift Mitigation with Contrastive Pretraining for Intrusion Detection
Authors: Weixing Wang, Haojin Yang, Christoph Meinel, Hasan Yagiz Özkan, Cristian Bermudez Serna, Carmen Mas-Machuca
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Networking and Internet Architecture (cs.NI)
Abstract
In recent years, there has been a growing interest in using Machine Learning (ML), especially Deep Learning (DL), to solve Network Intrusion Detection (NID) problems. However, the feature distribution shift problem remains a difficulty, because the change in features' distributions over time negatively impacts the model's performance. As one promising solution, model pretraining has emerged as a novel training paradigm that brings robustness against feature distribution shift and has proven successful in Computer Vision (CV) and Natural Language Processing (NLP). To verify whether this paradigm is beneficial for the NID problem, we propose SwapCon, an ML model in the context of NID, which compresses shift-invariant feature information during the pretraining stage and refines it during the finetuning stage. We illustrate feature distribution shift using the Kyoto2006+ dataset and demonstrate how pretraining a model of the proper size can increase robustness against feature distribution shifts by over 8%. Moreover, we show how an adequate numerical embedding strategy also enhances the performance of pretrained models. Further experiments show that the proposed SwapCon model also outperforms eXtreme Gradient Boosting (XGBoost) and K-Nearest Neighbor (KNN) based models by a large margin.
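For readers unfamiliar with contrastive pretraining, a generic NT-Xent-style loss is sketched below; SwapCon's actual objective and augmentations are not specified in the abstract, so treat this as an illustration of the paradigm only.

```python
# Generic NT-Xent (SimCLR-style) contrastive loss, the kind of objective used
# to pretrain shift-invariant representations. Illustrative, not SwapCon's.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same samples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # exclude self-pairs
    # Positive for row i is its other view: i+n for the first half, i-n after.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```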
ML-based identification of the interface regions for coupling local and nonlocal models
Authors: Noujoud Nader, Patrick Diehl, Marta D'Elia, Christian Glusa, Serge Prudhomme
Abstract
Local-nonlocal coupling approaches combine the computational efficiency of local models and the accuracy of nonlocal models. However, the coupling process is challenging, requiring expertise to identify the interface between local and nonlocal regions. This study introduces a machine learning-based approach to automatically detect the regions in which the local and nonlocal models should be used in a coupling approach. This identification process uses the loading functions and provides as output the selected model at the grid points. Training is based on datasets of loading functions for which reference coupling configurations are computed using accurate coupled solutions, where accuracy is measured in terms of the relative error between the solution to the coupling approach and the solution to the nonlocal model. We study two approaches that differ from one another in terms of the data structure. The first approach, referred to as the full-domain input data approach, inputs the full load vector and outputs a full label vector. In this case, the classification process is carried out globally. The second approach consists of a window-based approach, where loads are preprocessed and partitioned into windows and the problem is formulated as a node-wise classification approach in which the central point of each window is treated individually. The classification problems are solved via deep learning algorithms based on convolutional neural networks. The performance of these approaches is studied on one-dimensional numerical examples using F1-scores and accuracy metrics. In particular, it is shown that the windowing approach provides promising results, achieving an accuracy of 0.96 and an F1-score of 0.97. These results underscore the potential of the approach to automate coupling processes, leading to more accurate and computationally efficient solutions for material science applications.
Naïve Bayes and Random Forest for Crop Yield Prediction
Authors: Abbas Maazallahi, Sreehari Thota, Naga Prasad Kondaboina, Vineetha Muktineni, Deepthi Annem, Abhi Stephen Rokkam, Mohammad Hossein Amini, Mohammad Amir Salari, Payam Norouzzadeh, Eli Snir, Bahareh Rahmani
Abstract
This study analyzes crop yield prediction in India from 1997 to 2020, focusing on various crops and key environmental factors. It aims to predict agricultural yields by utilizing advanced machine learning techniques such as Linear Regression, Decision Tree, KNN, Naïve Bayes, K-Means Clustering, and Random Forest. The models, particularly Naïve Bayes and Random Forest, demonstrate high effectiveness, as shown through data visualizations. The research concludes that integrating these analytical methods significantly enhances the accuracy and reliability of crop yield predictions, offering vital contributions to agricultural data science.
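A minimal scikit-learn sketch of the two best-performing model families is given below; the dataset file, feature columns, and binned yield label are hypothetical placeholders.

```python
# Hedged sketch of the two best-performing model families reported above.
# The CSV file, feature columns, and label are placeholders, not the study's data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("crop_yield.csv")                         # hypothetical file
X = df[["rainfall", "temperature", "area", "fertilizer"]]  # assumed features
y = df["yield_class"]                                      # assumed binned yield label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (GaussianNB(), RandomForestClassifier(n_estimators=200, random_state=0)):
    score = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, f"accuracy={score:.3f}")
```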
Training all-mechanical neural networks for task learning through in situ backpropagation
Abstract
Recent advances unveiled physical neural networks as promising machine learning platforms, offering faster and more energy-efficient information processing. Compared with extensively studied optical neural networks, the development of mechanical neural networks (MNNs) remains nascent and faces significant challenges, including heavy computational demands and learning with approximate gradients. Here, we introduce the mechanical analogue of in situ backpropagation to enable highly efficient training of MNNs. We demonstrate that the exact gradient can be obtained locally in MNNs, enabling learning through their immediate vicinity. With the gradient information, we showcase the successful training of MNNs for behavior learning and machine learning tasks, achieving high accuracy in regression and classification. Furthermore, we present the retrainability of MNNs under task switching and damage, demonstrating their resilience. Our findings, which integrate the theory for training MNNs with experimental and numerical validations, pave the way for mechanical machine learning hardware and autonomous self-learning material systems.
Drop-Connect as a Fault-Tolerance Approach for RRAM-based Deep Neural Network Accelerators
Authors: Mingyuan Xiang, Xuhan Xie, Pedro Savarese, Xin Yuan, Michael Maire, Yanjing Li
Abstract
Resistive random-access memory (RRAM) is widely recognized as a promising emerging hardware platform for deep neural networks (DNNs). Yet, due to manufacturing limitations, current RRAM devices are highly susceptible to hardware defects, which poses a significant challenge to their practical applicability. In this paper, we present a machine learning technique that enables the deployment of defect-prone RRAM accelerators for DNN applications without requiring hardware modifications, retraining of the neural network, or additional detection circuitry/logic. The key idea is to incorporate a drop-connect-inspired approach during the training phase of a DNN, where random subsets of weights are selected to emulate fault effects (e.g., set to zero to mimic stuck-at-1 faults), thereby equipping the DNN with the ability to learn and adapt to RRAM defects at the corresponding fault rates. Our results demonstrate the viability of the drop-connect approach, coupled with various algorithm- and system-level design and trade-off considerations. We show that, even in the presence of high defect rates (e.g., up to 30%), the degradation of DNN accuracy can be kept below 1% relative to the fault-free version, while incurring minimal system-level runtime/energy costs.
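The core training idea can be sketched as follows, assuming a PyTorch linear layer and an illustrative 30% fault rate; this mirrors the description above but is not the authors' code.

```python
# Drop-connect-style fault injection sketch: during training, random weight
# subsets are zeroed to emulate stuck-at faults, so the DNN learns to tolerate
# defective RRAM cells. Rate and layer type are illustrative.
import torch
import torch.nn as nn

class FaultInjectedLinear(nn.Linear):
    def __init__(self, in_features, out_features, fault_rate=0.3):
        super().__init__(in_features, out_features)
        self.fault_rate = fault_rate

    def forward(self, x):
        if self.training:
            # Each weight is independently "stuck" (zeroed) with
            # probability fault_rate, emulating a defective cell.
            mask = (torch.rand_like(self.weight) >= self.fault_rate).float()
            return nn.functional.linear(x, self.weight * mask, self.bias)
        return super().forward(x)
```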
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Authors: Liang Qu, Cunze Wang, Yuhui Shi
Subjects: Machine Learning (cs.LG); Image and Video Processing (eess.IV)
Abstract
The application of deep learning techniques to medical problems has garnered widespread research interest in recent years, such as applying convolutional neural networks to medical image classification tasks. However, data in the medical field is often highly private, preventing different hospitals from sharing data to train an accurate model. Federated learning, as a privacy-preserving machine learning architecture, has shown promising performance in balancing data privacy and model utility by keeping private data on the client's side and using a central server to coordinate a set of clients for model training through aggregating their uploaded model parameters. Yet, this architecture heavily relies on a trusted third-party server, which is challenging to achieve in real life. Swarm learning, as a specialized decentralized federated learning architecture that does not require a central server, utilizes blockchain technology to enable direct parameter exchanges between clients. However, the mining of blocks requires significant computational resources, limiting its scalability. To address this issue, this paper integrates the brain storm optimization (BSO) algorithm into the swarm learning framework, yielding an approach named BSO-SL. This approach clusters similar clients into different groups based on their model distributions. Additionally, leveraging the architecture of BSO, clients are given a probability of engaging in collaborative learning both within their cluster and with clients outside it, preventing the model from converging to local optima. The proposed method has been validated on a real-world diabetic retinopathy image classification dataset, and the experimental results demonstrate the effectiveness of the proposed approach.
A Survey of Deep Long-Tail Classification Advancements
Authors: Charika de Alvis (The University of Sydney, Australia), Suranga Seneviratne (The University of Sydney, Australia)
Abstract
Many data distributions in the real world are hardly uniform. Instead, skewed and long-tailed distributions of various kinds are commonly observed. This poses an interesting problem for machine learning, where most algorithms assume or work well with uniformly distributed data. The problem is further exacerbated by current state-of-the-art deep learning models requiring large volumes of training data. As such, learning from imbalanced data remains a challenging research problem and one that must be solved as we move towards more real-world applications of deep learning. In the context of class imbalance, state-of-the-art (SOTA) accuracies on standard benchmark datasets for classification typically fall below 75%, even for less challenging datasets such as CIFAR100. Nonetheless, there has been progress in this niche area of deep learning. To this end, in this survey, we provide a taxonomy of various methods proposed for addressing the problem of long-tail classification, focusing on work from the last few years, under a single mathematical framework. We also discuss standard performance metrics, convergence studies, feature distribution, and classifier analysis, provide a quantitative comparison of the performance of different SOTA methods, and conclude the survey by discussing the remaining challenges and future research directions.
MISLEAD: Manipulating Importance of Selected features for Learning Epsilon in Evasion Attack Deception
Abstract
Emerging vulnerabilities in machine learning (ML) models due to adversarial attacks raise concerns about their reliability. Specifically, evasion attacks manipulate models by introducing precise perturbations to input data, causing erroneous predictions. To address this, we propose a methodology combining SHapley Additive exPlanations (SHAP) for feature importance analysis with an innovative Optimal Epsilon technique for conducting evasion attacks. Our approach begins with SHAP-based analysis to understand model vulnerabilities, crucial for devising targeted evasion strategies. The Optimal Epsilon technique, employing a Binary Search algorithm, efficiently determines the minimum epsilon needed for successful evasion. Evaluation across diverse machine learning architectures demonstrates the technique's precision in generating adversarial samples, underscoring its efficacy in manipulating model outcomes. This study emphasizes the critical importance of continuous assessment and monitoring to identify and mitigate potential security risks in machine learning systems.
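The Optimal Epsilon idea can be sketched as a plain binary search over the perturbation budget; the fixed attack direction (e.g., a gradient sign) and the budget cap are assumptions, and the SHAP-guided feature selection is omitted.

```python
# Binary-search sketch for the smallest perturbation budget at which a fixed
# attack direction flips the model's prediction. Illustrative only; the paper
# pairs this with SHAP-based feature importance analysis.
import numpy as np

def minimal_epsilon(predict, x, grad_sign, y_true, eps_hi=1.0, tol=1e-4):
    """predict: label function; grad_sign: attack direction for sample x."""
    eps_lo = 0.0
    if predict(x + eps_hi * grad_sign) == y_true:
        return None                      # attack fails even at the budget cap
    while eps_hi - eps_lo > tol:
        mid = (eps_lo + eps_hi) / 2
        if predict(x + mid * grad_sign) == y_true:
            eps_lo = mid                 # still classified correctly: go higher
        else:
            eps_hi = mid                 # evasion succeeds: try a smaller budget
    return eps_hi
```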
Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems
Abstract
Progress in machine learning and artificial intelligence has spurred the widespread adoption of automated decision systems (ADS). An extensive literature explores what conditions must be met for these systems' decisions to be fair. However, questions of legitimacy -- why those in control of ADS are entitled to make such decisions -- have received comparatively little attention. This paper shows that when such questions are raised theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such as fairness, accuracy, expertise or efficiency. In search of better theories, we conduct a critical analysis of the philosophical literature on the legitimacy of the state, focusing on consent, public reason, and democratic authorisation. This analysis reveals that the prevailing understanding of legitimacy in analytical political philosophy is also ill-suited to the task of establishing whether and when ADS are legitimate. The paper thus clarifies expectations for theories of ADS legitimacy and charts a path for a future research programme on the topic.
Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models
Authors: Elijah Pelofske, Vincent Urias, Lorie M. Liebrock
Abstract
Generative pre-trained transformers (GPTs) are a type of large language model that is unusually adept at producing novel, coherent natural language. In this study, the ability of GPT models to generate novel and correct versions, and notably very insecure versions, of implementations of the cryptographic hash function SHA-1 is examined. The GPT models Llama-2-70b-chat-h, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used. The GPT models are prompted to re-write each function using a modified version of the localGPT framework and langchain to provide word-embedding context of the full source code and header files to the model, resulting in over 130,000 function re-write GPT output text blocks, approximately 40,000 of which could be parsed as C code and subsequently compiled. The generated code is analyzed for compilability, correctness of the algorithm, memory leaks, compiler optimization stability, and character distance to the reference implementation. Remarkably, several generated function variants pose a high implementation security risk: they are correct for some test vectors but incorrect for others. Additionally, many function implementations were not correct to the reference algorithm of SHA-1 but produced hashes that have some of the basic characteristics of hash functions. Many of the function re-writes contained serious flaws such as memory leaks, integer overflows, out-of-bounds accesses, use of uninitialised values, and compiler optimization instability. Compiler optimization settings and SHA-256 hash checksums of the compiled binaries are used to cluster implementations that are equivalent but may not have identical syntax; using this clustering, over 100,000 novel and correct versions of the SHA-1 codebase were generated, in which each component C function differs from the reference implementation.
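The checksum-based clustering step might look roughly like the sketch below, which compiles each variant at several optimization levels and groups binaries by SHA-256; the directory layout and compiler flags are illustrative assumptions.

```python
# Sketch of checksum-based clustering: compile each GPT-rewritten variant at
# several optimization levels and group binaries by SHA-256, so syntactically
# different but behaviorally identical rewrites fall together.
import collections
import hashlib
import pathlib
import subprocess

clusters = collections.defaultdict(list)
for src in pathlib.Path("variants").glob("*.c"):          # hypothetical directory
    for opt in ("-O0", "-O2"):
        out = src.with_name(src.stem + opt + ".bin")
        result = subprocess.run(["gcc", opt, str(src), "-o", str(out)],
                                capture_output=True)
        if result.returncode == 0:                        # skip non-compiling variants
            digest = hashlib.sha256(out.read_bytes()).hexdigest()
            clusters[digest].append((src.name, opt))
```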
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
Abstract
The concept of differential privacy (DP) quantitatively measures privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset. DP, generally used as a constraint, has been prominent in safeguarding datasets in machine learning at industry giants like Apple and Google. A common methodology for guaranteeing DP is incorporating appropriate noise into query outputs, thereby establishing statistical defenses against privacy attacks such as membership inference and linkage attacks. However, especially for small datasets, existing DP mechanisms occasionally add an excessive amount of noise to query outputs, thereby sacrificing data utility. This is because traditional DP computes privacy loss based on the worst-case scenario, i.e., statistical outliers. In this work, to tackle this challenge, we use per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing the noise tailored to individual instances. In a nutshell, we propose a per-instance noise variance optimization (NVO) game, framed as a common-interest sequential game, and show that its Nash equilibrium (NE) points inherently guarantee pDP for all data instances. Through extensive experiments, our proposed pDP algorithm demonstrates an average performance improvement of up to 99.53% over the conventional DP algorithm in terms of KL divergence.
Exploring Machine Learning Algorithms for Infection Detection Using GC-IMS Data: A Preliminary Study
Abstract
The developing field of enhanced diagnostic techniques for infectious diseases constitutes a crucial domain in modern healthcare. By utilizing Gas Chromatography-Ion Mobility Spectrometry (GC-IMS) data and incorporating machine learning algorithms into one platform, our research aims to tackle the ongoing issue of precise infection identification. Motivated by these difficulties, our goals are to create a robust data analytics process, enhance machine learning (ML) models, and perform thorough validation for clinical applications. Our research contributes to the emerging field of advanced diagnostic technologies by integrating GC-IMS data and machine learning algorithms within a unified Laboratory Information Management System (LIMS) platform. Preliminary trials demonstrate encouraging levels of accuracy when employing various ML algorithms to differentiate between infected and non-infected samples. Ongoing work concentrates on enhancing the effectiveness of the model, investigating techniques to explain its functioning, and incorporating additional data types to further support the early detection of diseases.
SynthEval: A Framework for Detailed Utility and Privacy Evaluation of Tabular Synthetic Data
Authors: Anton Danholt Lautrup, Tobias Hyrup, Arthur Zimek, Peter Schneider-Kamp
Abstract
With the growing demand for synthetic data to address contemporary issues in machine learning, such as data scarcity, data fairness, and data privacy, having robust tools for assessing the utility and potential privacy risks of such data becomes crucial. SynthEval, a novel open-source evaluation framework, distinguishes itself from existing tools by treating categorical and numerical attributes with equal care, without assuming any special kind of preprocessing steps. This makes it applicable to virtually any synthetic dataset of tabular records. Our tool leverages statistical and machine learning techniques to comprehensively evaluate synthetic data fidelity and privacy-preserving integrity. SynthEval integrates a wide selection of metrics that can be used independently or in highly customisable benchmark configurations, and can easily be extended with additional metrics. In this paper, we describe SynthEval and illustrate its versatility with examples. The framework facilitates better benchmarking and more consistent comparisons of model capabilities.
A Configurable and Efficient Memory Hierarchy for Neural Network Hardware Accelerator
Authors: Oliver Bause, Paul Palomero Bernardo, Oliver Bringmann
Abstract
As machine learning applications continue to evolve, the demand for efficient hardware accelerators, specifically tailored for deep neural networks (DNNs), becomes increasingly vital. In this paper, we propose a configurable memory hierarchy framework tailored to the per-layer adaptive memory access patterns of DNNs. The hierarchy requests data on demand from the off-chip memory to provide it to the accelerator's compute units. The objective is to strike an optimized balance between minimizing the required memory capacity and maintaining high accelerator performance. The framework is characterized by its configurability, allowing the creation of a tailored memory hierarchy with up to five levels. Furthermore, the framework incorporates an optional shift register as the final level to increase the flexibility of the memory management process. A comprehensive loop-nest analysis of DNN layers shows that the framework can efficiently execute the access patterns of most loop unrolls. Synthesis results and a case study of the DNN accelerator UltraTrail indicate a possible reduction in chip area of up to 62.2%, as smaller memory modules can be used. At the same time, the performance loss can be limited to 2.4%.
OpTC -- A Toolchain for Deployment of Neural Networks on AURIX TC3xx Microcontrollers
Authors: Christian Heidorn, Frank Hannig, Dominik Riedelbauch, Christoph Strohmeyer, Jürgen Teich
Abstract
The AURIX 2xx and 3xx families of TriCore microcontrollers are widely used in the automotive industry and, recently, also in applications that involve machine learning tasks. Yet, these applications are mainly engineered manually, and little tool support exists for bringing neural networks to TriCore microcontrollers. Thus, we propose OpTC, an end-to-end toolchain for automatic compression, conversion, code generation, and deployment of neural networks on TC3xx microcontrollers. OpTC supports various types of neural networks and provides compression using layer-wise pruning based on sensitivity analysis for a given neural network. The flexibility in supporting different types of neural networks, such as multi-layer perceptrons (MLP), convolutional neural networks (CNN), and recurrent neural networks (RNN), is shown in case studies for a TC387 microcontroller. Automotive applications for predicting the temperature in electric motors and detecting anomalies are thereby used to demonstrate the effectiveness and the wide range of applications supported by OpTC.
Employing Two-Dimensional Word Embedding for Difficult Tabular Data Stream Classification
Abstract
Rapid technological advances are inherently linked to an increased amount of data, a substantial portion of which can be interpreted as a data stream, capable of exhibiting the phenomenon of concept drift and having a high imbalance ratio. Consequently, developing new approaches to classifying difficult data streams is a rapidly growing research area. At the same time, the proliferation of deep learning and transfer learning, as well as the success of convolutional neural networks in computer vision tasks, have contributed to the emergence of a new research trend, namely Multi-Dimensional Encoding (MDE), which focuses on transforming tabular data into a homogeneous form of a discrete digital signal. This paper proposes Streaming Super Tabular Machine Learning (SSTML), thereby exploring for the first time the potential of MDE in the difficult data stream classification task. SSTML encodes consecutive data chunks into an image representation using the STML algorithm and then performs a single ResNet-18 training epoch. Experiments conducted on synthetic and real data streams have demonstrated the ability of SSTML to achieve classification quality statistically significantly superior to state-of-the-art algorithms while maintaining comparable processing time.
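The tabular-to-image step can be sketched generically as below; the actual STML encoding differs, so this only conveys the Multi-Dimensional Encoding idea of turning a tabular chunk into something a vision backbone such as ResNet-18 can consume.

```python
# Generic MDE-style sketch: normalize a tabular chunk and tile it into a 2D
# "image". Illustrative only; not the STML algorithm used by SSTML.
import numpy as np

def chunk_to_image(chunk, size=224):
    """chunk: (n_samples, n_features) array -> (size, size) float image."""
    flat = (chunk - chunk.min()) / (np.ptp(chunk) + 1e-8)  # scale to [0, 1]
    flat = flat.ravel()
    reps = int(np.ceil(size * size / flat.size))           # repeat to fill the canvas
    return np.tile(flat, reps)[: size * size].reshape(size, size)
```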
Near-Optimal Wafer-Scale Reduce
Authors: Piotr Luczynski, Lukas Gianinazzi, Patrick Iff, Leighton Wilson, Daniele De Sensi, Torsten Hoefler
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Performance (cs.PF)
Abstract
Efficient Reduce and AllReduce communication collectives are a critical cornerstone of high-performance computing (HPC) applications. We present the first systematic investigation of Reduce and AllReduce on the Cerebras Wafer-Scale Engine (WSE). This architecture has been shown to achieve unprecedented performance both for machine learning workloads and other computational problems like FFT. We introduce a performance model to estimate the execution time of algorithms on the WSE and validate our predictions experimentally for a wide range of input sizes. In addition to existing implementations, we design and implement several new algorithms specifically tailored to the architecture. Moreover, we establish a lower bound for the runtime of a Reduce operation on the WSE. Based on our model, we automatically generate code that achieves near-optimal performance across the whole range of input sizes. Experiments demonstrate that our new Reduce and AllReduce algorithms outperform the current vendor solution by up to 3.27x. Additionally, our model predicts performance with less than 4% error. The proposed communication collectives increase the range of HPC applications that can benefit from the high throughput of the WSE. Our model-driven methodology demonstrates a disciplined approach that can lead the way to further algorithmic advancements on wafer-scale architectures.
ST-MambaSync: The Confluence of Mamba Structure and Spatio-Temporal Transformers for Precipitous Traffic Prediction
Authors: Zhiqi Shao, Xusheng Yao, Ze Wang, Junbin Gao
Abstract
Balancing accuracy with computational efficiency is paramount in machine learning, particularly when dealing with high-dimensional data, such as spatial-temporal datasets. This study introduces ST-MambaSync, an innovative framework that integrates a streamlined attention layer with a simplified state-space layer. The model achieves competitive accuracy in spatial-temporal prediction tasks. We delve into the relationship between attention mechanisms and the Mamba component, revealing that Mamba functions akin to attention within a residual network structure. This comparative analysis underpins the efficiency of state-space models, elucidating their capability to deliver superior performance at reduced computational costs.
An Element-Wise Weights Aggregation Method for Federated Learning
Abstract
Federated learning (FL) is a powerful Machine Learning (ML) paradigm that enables distributed clients to collaboratively learn a shared global model while keeping the data on the original device, thereby preserving privacy. A central challenge in FL is the effective aggregation of local model weights from disparate and potentially unbalanced participating clients. Existing methods often treat all clients uniformly, applying a single mixing proportion to the entire local model. However, it is empirically advantageous for each weight to be assigned a specific proportion. This paper introduces an innovative Element-Wise Weights Aggregation Method for Federated Learning (EWWA-FL) aimed at optimizing learning performance and accelerating convergence speed. Unlike traditional FL approaches, EWWA-FL aggregates local weights into the global model at the level of individual elements, thereby allowing each participating client to make element-wise contributions to the learning process. By taking into account the unique dataset characteristics of each client, EWWA-FL enhances the robustness of the global model to different datasets while also achieving rapid convergence. The method is flexible enough to employ various weighting strategies. Through comprehensive experiments, we demonstrate the advanced capabilities of EWWA-FL, showing significant improvements in both accuracy and convergence speed across a range of backbones and benchmarks.
Generalization Measures for Zero-Shot Cross-Lingual Transfer
Authors: Saksham Bassi, Duygu Ataman, Kyunghyun Cho
Abstract
A model's capacity to generalize its knowledge to interpret unseen inputs with different characteristics is crucial to building robust and reliable machine learning systems. Language model evaluation tasks lack informative metrics of model generalization, and applicability in a new setting is instead measured using task- and language-specific downstream performance, which is often unavailable for many languages and tasks. In this paper, we explore a set of efficient and reliable measures that could aid in computing more information related to the generalization capability of language models in cross-lingual zero-shot settings. In addition to traditional measures such as variance in parameters after training and distance from initialization, we also measure the effectiveness of loss-landscape sharpness in capturing the success of cross-lingual transfer and propose a novel and stable algorithm to reliably compute the sharpness of a model optimum that correlates with generalization.
Boosting Architectural Generation via Prompts: Report
Abstract
In the realm of AI architectural design, the importance of prompts is becoming increasingly prominent. With advancements in artificial intelligence and large-scale model technology, more design tasks are being delegated to machine learning algorithms. This necessitates a method for designers to guide algorithms in producing their desired designs. Prompts serve as a guiding and motivational mechanism, playing a crucial role in AI-generated architectural design. This paper categorizes and summarizes common vocabulary used in architectural design, discussing how to craft effective prompts and their impact on the quality and creativity of generated results. Through careful prompt design, designers can better control the generated architectural design images, thereby achieving designs that are more aligned with requirements and innovative.
A comprehensive and easy-to-use multi-domain multi-task medical imaging meta-dataset (MedIMeta)
Authors: Stefano Woerner, Arthur Jaques, Christian F. Baumgartner
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
While the field of medical image analysis has undergone a transformative shift with the integration of machine learning techniques, the main challenge of these techniques is often the scarcity of large, diverse, and well-annotated datasets. Medical images vary in format, size, and other parameters and therefore require extensive preprocessing and standardization for use in machine learning. Addressing these challenges, we introduce the Medical Imaging Meta-Dataset (MedIMeta), a novel multi-domain, multi-task meta-dataset. MedIMeta contains 19 medical imaging datasets spanning 10 different domains and encompassing 54 distinct medical tasks, all of which are standardized to the same format and readily usable in PyTorch or other ML frameworks. We perform a technical validation of MedIMeta, demonstrating its utility through fully supervised and cross-domain few-shot learning baselines.
Unimodal and Multimodal Sensor Fusion for Wearable Activity Recognition
Authors: Hymalai Bello
Subjects: Machine Learning (cs.LG); Signal Processing (eess.SP)
Abstract
Combining different sensing modalities with multiple positions helps form a unified perception and understanding of complex situations such as human behavior. Hence, human activity recognition (HAR) benefits from combining redundant and complementary information (unimodal/multimodal). Even so, it is not an easy task. It requires a multidisciplinary approach, including expertise in sensor technologies, signal processing, data fusion algorithms, and domain-specific knowledge. This Ph.D. work employs sensing modalities such as inertial, pressure (audio and atmospheric pressure), and textile capacitive sensing for HAR. The scenarios explored are gesture and hand position tracking, facial and head pattern recognition, and body posture and gesture recognition. The selected wearable devices and sensing modalities are fully integrated with machine learning-based algorithms, some of which are implemented on embedded devices at the edge and tested in real time.
Keyword: optimization
On the implementation in Abaqus of the global–local iterative coupling and acceleration techniques
Authors: Omar Bettinotti, Stéphane Guinard, Eric Véron, Pierre Gosselet (LaMcube)
Abstract
This paper presents results and a convergence study of global–local iterative coupling implemented in the commercial software Abaqus using its co-simulation engine. A hierarchical modeling and simulation approach is often required to alleviate modeling burdens. Particular focus is devoted here to convergence acceleration and performance optimization. Two applications in statics with nonlinear material behavior and a geometrically nonlinear formulation are considered: first, a holed curved plate under traction with an elastic–plastic material; then, a pre-stressed bolted joint connecting two plates subjected to a traction load. Three convergence acceleration techniques are compared in terms of convergence performance and accuracy. An inexact solver strategy is proposed to improve computing time. The results are promising for the coupling technology and constitute a step forward in the availability of non-intrusive multi-scale modeling capabilities for complex structures and assemblies.
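For intuition, the global–local exchange is a fixed-point iteration on interface data, and a classic accelerator for such iterations is Aitken's dynamic relaxation. The sketch below shows that generic pattern under stated assumptions (solve_global and solve_local are placeholder callables returning updated interface data); it is not the paper's Abaqus co-simulation implementation.

```python
import numpy as np

def aitken_coupled_solve(solve_global, solve_local, x0, tol=1e-8, max_it=100):
    """Relaxed fixed-point iteration x <- x + omega * (G(x) - x) on interface data."""
    x, omega, r_old = x0.astype(float), 0.5, None
    for _ in range(max_it):
        r = solve_local(solve_global(x)) - x            # interface residual
        if np.linalg.norm(r) < tol:
            break
        if r_old is not None:
            dr = r - r_old
            omega = -omega * (r_old @ dr) / (dr @ dr)   # Aitken's dynamic relaxation
        x = x + omega * r                               # relaxed update
        r_old = r
    return x
```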
Facilitating Human Feedback for GenAI Prompt Optimization
Abstract
This study investigates the optimization of Generative AI (GenAI) systems through human feedback, focusing on how varying feedback mechanisms influence the quality of GenAI outputs. We devised a Human-AI training loop where 32 students, divided into two groups, evaluated AI-generated responses based on a single prompt. One group assessed a single output, while the other compared two outputs. Preliminary results from this small-scale experiment suggest that comparative feedback might encourage more nuanced evaluations, highlighting the potential for improved human-AI collaboration in prompt optimization. Future research with larger samples is recommended to validate these findings and further explore effective feedback strategies for GenAI systems.
Advances and Open Challenges in Federated Learning with Foundation Models
Authors: Chao Ren, Han Yu, Hongyi Peng, Xiaoli Tang, Anran Li, Yulan Gao, Alysa Ziying Tan, Bo Zhao, Xiaoxiao Li, Zengxiang Li, Qiang Yang
Abstract
The integration of Foundation Models (FMs) with Federated Learning (FL) presents a transformative paradigm in Artificial Intelligence (AI), offering enhanced capabilities while addressing concerns of privacy, data decentralization, and computational efficiency. This paper provides a comprehensive survey of the emerging field of Federated Foundation Models (FedFM), elucidating their synergistic relationship and exploring novel methodologies, challenges, and future directions that the FL research field needs to focus on in order to thrive in the age of foundation models. A systematic multi-tiered taxonomy is proposed, categorizing existing FedFM approaches for model training, aggregation, trustworthiness, and incentivization. Key challenges, including how to enable FL to cope with the high computational demands of FMs, privacy considerations, contribution evaluation, and communication efficiency, are thoroughly discussed. Moreover, the paper explores the intricate challenges of communication, scalability, and security inherent in training/fine-tuning FMs via FL, highlighting the potential of quantum computing to revolutionize the training, inference, optimization, and data encryption processes. This survey underscores the importance of further research to propel innovation in FedFM, emphasizing the need for developing trustworthy solutions. It serves as a foundational guide for researchers and practitioners interested in contributing to this interdisciplinary and rapidly advancing field.
Metric3D v2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation
Authors: Mu Hu, Wei Yin, Chi Zhang, Zhipeng Cai, Xiaoxiao Long, Hao Chen, Kaixuan Wang, Gang Yu, Chunhua Shen, Shaojie Shen
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
We introduce Metric3D v2, a geometric foundation model for zero-shot metric depth and surface normal estimation from a single image, which is crucial for metric 3D recovery. While depth and normal are geometrically related and highly complementary, they present distinct challenges. SoTA monocular depth methods achieve zero-shot generalization by learning affine-invariant depths, which cannot recover real-world metrics. Meanwhile, SoTA normal estimation methods have limited zero-shot performance due to the lack of large-scale labeled data. To tackle these issues, we propose solutions for both metric depth estimation and surface normal estimation. For metric depth estimation, we show that the key to a zero-shot single-view model lies in resolving the metric ambiguity arising from various camera models, together with large-scale data training. We propose a canonical camera space transformation module, which explicitly addresses the ambiguity problem and can be effortlessly plugged into existing monocular models. For surface normal estimation, we propose a joint depth-normal optimization module to distill diverse data knowledge from metric depth, enabling normal estimators to learn beyond normal labels. Equipped with these modules, our depth-normal models can be stably trained on over 16 million images from thousands of camera models with annotations of different types, resulting in zero-shot generalization to in-the-wild images with unseen camera settings. Our method enables the accurate recovery of metric 3D structures on randomly collected internet images, paving the way for plausible single-image metrology. Our project page is at https://JUGGHM.github.io/Metric3Dv2.
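To make the canonical-camera idea concrete, a minimal sketch of one simple realization is shown below: depth labels are rescaled as if every image were captured at a canonical focal length, and predictions are rescaled back at inference. The function names and f_canonical value are assumptions of this sketch, not the paper's exact module.

```python
def depth_to_canonical(depth, focal, f_canonical=1000.0):
    # Rescale metric depth labels so all training data is consistent with a
    # single canonical camera, removing the per-camera focal-length ambiguity.
    return depth * f_canonical / focal

def depth_from_canonical(depth_canonical, focal, f_canonical=1000.0):
    # Invert the transformation at inference to recover metric depth for the
    # actual camera that captured the image.
    return depth_canonical * focal / f_canonical
```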
Research on OPF control of three-phase four-wire low-voltage distribution network considering uncertainty
Abstract
As power systems become more complex and uncertain, low-voltage distribution networks face numerous challenges, including three-phase imbalances caused by asymmetrical loads and distributed energy resources. To address these issues, we propose a robust stochastic optimization (RSO) based optimal power flow (OPF) control method for three-phase, four-wire low-voltage distribution networks that accounts for uncertainty. Using historical data and deep learning classification methods, the proposed method simulates optimal system behaviour without requiring communication infrastructure. The simulation results verify that the proposed method effectively controls the voltage and current amplitude while minimizing the operational cost and keeping the three-phase imbalance within acceptable limits. The proposed method shows promise for managing uncertainties and optimizing performance in low-voltage distribution networks.
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Authors: Liang Qu, Cunze Wang, Yuhui Shi
Subjects: Machine Learning (cs.LG); Image and Video Processing (eess.IV)
Abstract
The application of deep learning techniques to medical problems has garnered widespread research interest in recent years, such as applying convolutional neural networks to medical image classification tasks. However, data in the medical field is often highly private, preventing different hospitals from sharing data to train an accurate model. Federated learning, as a privacy-preserving machine learning architecture, has shown promising performance in balancing data privacy and model utility by keeping private data on the client's side and using a central server to coordinate a set of clients for model training through aggregating their uploaded model parameters. Yet, this architecture heavily relies on a trusted third-party server, which is challenging to achieve in real life. Swarm learning, as a specialized decentralized federated learning architecture that does not require a central server, utilizes blockchain technology to enable direct parameter exchanges between clients. However, the mining of blocks requires significant computational resources, limiting its scalability. To address this issue, this paper integrates the brain storm optimization algorithm into the swarm learning framework, yielding BSO-SL. This approach clusters similar clients into different groups based on their model distributions. Additionally, leveraging the architecture of BSO, clients are given a probability of engaging in collaborative learning both within their cluster and with clients outside it, preventing the model from converging to local optima. The proposed method has been validated on a real-world diabetic retinopathy image classification dataset, and the experimental results demonstrate the effectiveness of the proposed approach.
Decentralized Multi-Agent Trajectory Planning in Dynamic Environments with Spatiotemporal Occupancy Grid Maps
Authors: Siyuan Wu, Gang Chen, Moji Shi, Javier Alonso-Mora
Abstract
This paper proposes a decentralized trajectory planning framework for the collision avoidance problem of multiple micro aerial vehicles (MAVs) in environments with static and dynamic obstacles. The framework utilizes spatiotemporal occupancy grid maps (SOGM), which forecast the occupancy status of neighboring space in the near future, as the environment representation. Based on this representation, we extend the kinodynamic A* and the corridor-constrained trajectory optimization algorithms to efficiently tackle static and dynamic obstacles with arbitrary shapes. Collision avoidance between communicating robots is integrated by sharing planned trajectories and projecting them onto the SOGM. The simulation results show that our method achieves competitive performance against state-of-the-art methods in dynamic environments with different numbers and shapes of obstacles. Finally, the proposed method is validated in real experiments.
MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition
Abstract
Emotion decoding using Electroencephalography (EEG)-based affective brain-computer interfaces (aBCIs) represents a significant area within the field of affective computing. In the present study, we propose a novel non-deep transfer learning method, termed Manifold-based Domain adaptation with Dynamic Distribution (MDDD). The proposed MDDD includes four main modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning. The data undergoes a transformation onto an optimal Grassmann manifold space, enabling dynamic alignment of the source and target domains. This process prioritizes both marginal and conditional distributions according to their significance, ensuring enhanced adaptation efficiency across various types of data. In the classifier learning, the principle of structural risk minimization is integrated to develop robust classification models. This is complemented by dynamic distribution alignment, which refines the classifier iteratively. Additionally, the ensemble learning module aggregates the classifiers obtained at different stages of the optimization process, leveraging the diversity of the classifiers to enhance overall prediction accuracy. The experimental results indicate that MDDD outperforms traditional non-deep learning methods, achieving an average improvement of 3.54%, and is comparable to deep learning methods. This suggests that MDDD could be a promising method for enhancing the utility and applicability of aBCIs in real-world scenarios.
DPO: Differential reinforcement learning with application to optimal configuration search
Authors: Chandrajit Bajaj, Minh Nguyen
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Optimization and Control (math.OC); Statistics Theory (math.ST)
Abstract
Reinforcement learning (RL) with continuous state and action spaces remains one of the most challenging problems within the field. Most current learning methods focus on integral identities such as value functions to derive an optimal strategy for the learning agent. In this paper, we instead study the dual form of the original RL formulation to propose the first differential RL framework that can handle settings with limited training samples and short-length episodes. Our approach introduces Differential Policy Optimization (DPO), a pointwise and stage-wise iteration method that optimizes policies encoded by local-movement operators. We prove a pointwise convergence estimate for DPO and provide a regret bound comparable with those of current theoretical works. This pointwise estimate ensures that the learned policy matches the optimal path uniformly across different steps. We then apply DPO to a class of practical RL problems that search for optimal configurations with Lagrangian rewards. DPO is easy to implement, scalable, and shows competitive results on benchmarking experiments against several popular RL methods.
Dynamic Beam Coverage for Satellite Communications Aided by Movable-Antenna Array
Abstract
Due to the ultra-dense constellation, efficient beam coverage and interference mitigation are crucial to low-earth orbit (LEO) satellite communication systems, while the conventional directional antennas and fixed-position antenna (FPA) arrays both have limited degrees of freedom (DoFs) in beamforming to adapt to the time-varying coverage requirement of terrestrial users. To address this challenge, we propose in this paper utilizing movable antenna (MA) arrays to enhance the satellite beam coverage and interference mitigation. Specifically, given the satellite orbit and the coverage requirement within a specific time interval, the antenna position vector (APV) and antenna weight vector (AWV) of the satellite-mounted MA array are jointly optimized over time to minimize the average signal leakage power to the interference area of the satellite, subject to the constraints of the minimum beamforming gain over the coverage area, the continuous movement of MAs, and the constant modulus of AWV. The corresponding continuous-time decision process for the APV and AWV is first transformed into a more tractable discrete-time optimization problem. Then, an alternating optimization (AO)-based algorithm is developed by iteratively optimizing the APV and AWV, where the successive convex approximation (SCA) technique is utilized to obtain locally optimal solutions during the iterations. Moreover, to further reduce the antenna movement overhead, a low-complexity MA scheme is proposed by using an optimized common APV over all time slots. Simulation results validate that the proposed MA array-aided beam coverage schemes can significantly decrease the interference leakage of the satellite compared to conventional FPA-based schemes, while the low-complexity MA scheme can achieve a performance comparable to the continuous-movement scheme.
MalleTrain: Deep Neural Network Training on Unfillable Supercomputer Nodes
Authors: Xiaolong Ma, Feng Yan, Lei Yang, Ian Foster, Michael E. Papka, Zhengchun Liu, Rajkumar Kettimuthu
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
First-come, first-served scheduling can leave a substantial fraction (up to 10%) of supercomputer nodes transiently idle. Recognizing that such unfilled nodes are well-suited for deep neural network (DNN) training, due to the flexible nature of DNN training tasks, Liu et al. proposed that re-scaling DNN training tasks to fit gaps in schedules be formulated as a mixed-integer linear programming (MILP) problem, and demonstrated via simulation the potential benefits of the approach. Here, we introduce MalleTrain, a system that provides the first practical implementation of this approach and, furthermore, generalizes it by allowing its use even for DNN training applications for which model information is unknown before runtime. Key to this latter innovation is a lightweight online job profiling advisor (JPA) that collects critical scalability information for DNN jobs, information it then employs to optimize resource allocations dynamically, in real time. We describe the MalleTrain architecture and present the results of a detailed experimental evaluation on a supercomputer GPU cluster with several representative DNN training workloads, including neural architecture search and hyperparameter optimization. Our results not only confirm the practical feasibility of leveraging idle supercomputer nodes for DNN training but also improve significantly on prior results, increasing training throughput by up to 22.3% without requiring users to provide job scalability information.
Abstract
Since Gartner coined the term Hybrid Transactional and Analytical Processing (HTAP), numerous HTAP databases have been proposed to combine transactions with analytics in order to enable real-time data analytics for various data-intensive applications. HTAP databases typically process the mixed workloads of transactions and analytical queries in a unified system by leveraging both a row store and a column store. As there are different storage architectures and processing techniques to satisfy various requirements of diverse applications, it is critical to summarize the pros and cons of these key techniques. This paper offers a comprehensive survey of HTAP databases. We mainly classify state-of-the-art HTAP databases according to four storage architectures: (a) Primary Row Store and In-Memory Column Store; (b) Distributed Row Store and Column Store Replica; (c) Primary Row Store and Distributed In-Memory Column Store; and (d) Primary Column Store and Delta Row Store. We then review the key techniques in HTAP databases, including hybrid workload processing, data organization, data synchronization, query optimization, and resource scheduling. We also discuss existing HTAP benchmarks. Finally, we provide the research challenges and opportunities for HTAP techniques.
Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models
Authors: Elijah Pelofske, Vincent Urias, Lorie M. Liebrock
Abstract
Generative pre-trained transformers (GPTs) are a type of large language model that is unusually adept at producing novel, coherent natural language. In this study, the ability of GPT models to generate novel and correct versions, and notably very insecure versions, of implementations of the cryptographic hash function SHA-1 is examined. The GPT models Llama-2-70b-chat-hf, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used. The GPT models are prompted to re-write each function using a modified version of the localGPT framework and langchain, providing word-embedding context from the full source code and header files to the model; this produced over 130,000 function re-write GPT output text blocks, approximately 40,000 of which could be parsed as C code and subsequently compiled. The generated code is analyzed for compilability, correctness of the algorithm, memory leaks, compiler optimization stability, and character distance to the reference implementation. Remarkably, several generated function variants have a high implementation security risk of being correct for some test vectors but incorrect for others. Additionally, many function implementations were not correct with respect to the reference SHA-1 algorithm but produced hashes that have some of the basic characteristics of hash functions. Many of the function re-writes contained serious flaws such as memory leaks, integer overflows, out-of-bounds accesses, use of uninitialised values, and compiler optimization instability. Compiler optimization settings and SHA-256 hash checksums of the compiled binaries are used to cluster implementations that are equivalent but may not have identical syntax; using this clustering, over 100,000 novel and correct versions of the SHA-1 codebase were generated, in which each component C function differs from the original code.
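As a minimal illustration of the final clustering step (the file handling and function name below are assumptions of this sketch, not the study's pipeline), grouping compiled binaries by the SHA-256 digest of their bytes collapses syntactically different variants that build to identical binaries into one equivalence class:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def cluster_by_binary_hash(binary_paths):
    # Variants whose compiled binaries are byte-identical fall into the same
    # cluster, even when their C source code differs syntactically.
    clusters = defaultdict(list)
    for path in map(Path, binary_paths):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        clusters[digest].append(str(path))
    return dict(clusters)
```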
Generative Diffusion Model (GDM) for Optimization of Wi-Fi Networks
Authors: Tie Liu, Xuming Fang, Rong He
Subjects: Networking and Internet Architecture (cs.NI)
Abstract
Generative Diffusion Models (GDMs) have made significant strides in modeling complex data distributions across diverse domains. Meanwhile, Deep Reinforcement Learning (DRL) has demonstrated substantial improvements in optimizing Wi-Fi network performance. Wi-Fi optimization problems are highly challenging to model mathematically; DRL methods can bypass complex mathematical modeling, while GDMs excel at modeling complex data distributions. Therefore, combining DRL with GDMs can mutually enhance their capabilities. The current MAC-layer access mechanism in Wi-Fi networks is the Distributed Coordination Function (DCF), whose performance declines dramatically with a high number of terminals. In this paper, we apply diffusion models to the deep deterministic policy gradient algorithm, yielding the Deep Diffusion Deterministic Policy (D3PG) algorithm, to optimize Wi-Fi performance. Although such integrations have been explored previously, we are the first to apply them to Wi-Fi network performance optimization. We propose an access mechanism that jointly adjusts the contention window and frame length based on the D3PG algorithm. Through simulations, we demonstrate that this mechanism significantly outperforms existing Wi-Fi standards in dense Wi-Fi scenarios, maintaining performance even as the number of users sharply increases.
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
Abstract
The concept of differential privacy (DP) can quantitatively measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset. DP, which is generally used as a constraint, has been prominent in safeguarding datasets in machine learning at industry giants like Apple and Google. A common methodology for guaranteeing DP is incorporating appropriate noise into query outputs, thereby establishing statistical defense systems against privacy attacks such as membership inference and linkage attacks. However, especially for small datasets, existing DP mechanisms occasionally add an excessive amount of noise to query outputs, thereby discarding data utility. This is because traditional DP computes privacy loss based on the worst-case scenario, i.e., statistical outliers. In this work, to tackle this challenge, we utilize per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances. In a nutshell, we propose a per-instance noise variance optimization (NVO) game, framed as a common interest sequential game, and show that its Nash equilibrium (NE) points inherently guarantee pDP for all data instances. Through extensive experiments, our proposed pDP algorithm demonstrated an average performance improvement of up to 99.53% compared to the conventional DP algorithm in terms of KL divergence.
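For intuition about the per-instance view, pDP loss is easy to compute for simple mechanisms: under the Laplace mechanism with scale b, instance i's loss for a query f is its leave-one-out sensitivity divided by the noise scale, |f(D) - f(D without i)| / b. The sketch below computes this for a mean query; it only illustrates the pDP quantity and does not reproduce the paper's NVO game.

```python
import numpy as np

def per_instance_epsilon(data, query, b):
    # pDP loss of each instance under the Laplace mechanism with scale b:
    # eps_i = |f(D) - f(D without i)| / b  (leave-one-out sensitivity / scale).
    full = query(data)
    return np.array([abs(full - query(np.delete(data, i))) / b
                     for i in range(len(data))])

rng = np.random.default_rng(0)
data = rng.normal(size=100)
eps = per_instance_epsilon(data, np.mean, b=0.1)
print(f"worst-case eps: {eps.max():.4f}, median eps: {np.median(eps):.4f}")
# Outliers dominate the worst case, which is why calibrating noise to the
# worst case over-perturbs the typical instance.
```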
Delay-Aware Multi-Agent Reinforcement Learning for Cooperative Adaptive Cruise Control with Model-based Stability Enhancement
Authors: Jiaqi Liu, Ziran Wang, Peng Hang, Jian Sun
Abstract
Cooperative Adaptive Cruise Control (CACC) represents a quintessential control strategy for orchestrating vehicular platoon movement within Connected and Automated Vehicle (CAV) systems, significantly enhancing traffic efficiency and reducing energy consumption. In recent years, data-driven methods such as reinforcement learning (RL) have been employed to address this task due to their significant advantages in terms of efficiency and flexibility. However, the delay issue, which often arises in real-world CACC systems, is rarely taken into account by current RL-based approaches. To tackle this problem, we propose a Delay-Aware Multi-Agent Reinforcement Learning (DAMARL) framework aimed at achieving safe and stable control for CACC. We model the entire decision-making process using a Multi-Agent Delay-Aware Markov Decision Process (MADA-MDP) and develop a centralized training with decentralized execution (CTDE) MARL framework for distributed control of CACC platoons. An attention mechanism-integrated policy network is introduced to enhance the performance of CAV communication and decision-making. Additionally, a velocity optimization model-based action filter is incorporated to further ensure the stability of the platoon. Experimental results across various delay conditions and platoon sizes demonstrate that our approach consistently outperforms baseline methods in terms of platoon safety, stability, and overall performance.
BlissCam: Boosting Eye Tracking Efficiency with Learned In-Sensor Sparse Sampling
Authors: Yu Feng, Tianrui Ma, Yuhao Zhu, Xuan Zhang
Abstract
Eye tracking is becoming an increasingly important task domain in emerging computing platforms such as Augmented/Virtual Reality (AR/VR). Today's eye tracking systems suffer from long end-to-end tracking latency and can easily eat up half of the power budget of a mobile VR device. Most existing optimization efforts focus exclusively on the computation pipeline, optimizing the algorithm and/or designing dedicated accelerators, while largely ignoring the front-end of any eye tracking pipeline: the image sensor. This paper makes a case for co-designing the imaging system with the computing system. In particular, we propose the notion of "in-sensor sparse sampling", whereby pixels are drastically downsampled (by 20x) within the sensor. Such in-sensor sampling enhances overall tracking efficiency by significantly reducing 1) the power consumption of the sensor readout chain and sensor-host communication interfaces, two major power contributors, and 2) the work done on the host, which receives and operates on far fewer pixels. With careful reuse of existing pixel circuitry, our proposed BlissCam requires little hardware augmentation to support the in-sensor operations. Our synthesis results show up to 8.2x energy reduction and 1.4x latency reduction over existing eye tracking pipelines.
Generalizing the SINDy approach with nested neural networks
Authors: Camilla Fiorini, Clément Flint, Louis Fostier, Emmanuel Franck, Reyhaneh Hashemi, Victor Michel-Dansac, Wassim Tenachi
Abstract
Symbolic Regression (SR) is a widely studied field of research that aims to infer symbolic expressions from data. A popular approach for SR is the Sparse Identification of Nonlinear Dynamical Systems (SINDy) framework, which uses sparse regression to identify governing equations from data. This study introduces an enhanced method, Nested SINDy, that aims to increase the expressivity of the SINDy approach through a nested structure. Indeed, traditional symbolic regression and system identification methods often fail with complex systems that cannot be easily described analytically. Nested SINDy builds on the SINDy framework by introducing additional layers before and after the core SINDy layer. This allows the method to identify symbolic representations for a wider range of systems, including those with compositions and products of functions. We demonstrate the ability of the Nested SINDy approach to accurately find symbolic expressions for simple systems, such as basic trigonometric functions, and sparse (false but accurate) analytical representations for more complex systems. Our results highlight Nested SINDy's potential as a tool for symbolic regression, surpassing the traditional SINDy approach in terms of expressivity. However, we also note the challenges in the optimization process for Nested SINDy and suggest future research directions, including the design of a more robust optimization methodology. This study shows that Nested SINDy can effectively discover symbolic representations of dynamical systems from data, offering new opportunities for understanding complex systems through data-driven methods.
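For readers unfamiliar with the core SINDy layer that Nested SINDy wraps, the sketch below shows the standard sequentially thresholded least squares (STLSQ) step over a precomputed candidate-term library; it illustrates the core only, not the nested architecture.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Theta: (m, p) library of candidate terms; dXdt: (m, n) derivatives."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]   # initial dense fit
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0                                # prune weak terms
        for k in range(Xi.shape[1]):                   # refit survivors per state
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(
                    Theta[:, keep], dXdt[:, k], rcond=None)[0]
    return Xi   # sparse coefficients: column k encodes the k-th governing equation
```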
3D Face Morphing Attack Generation using Non-Rigid Registration
Authors: Jag Mohan Singh, Raghavendra Ramachandra
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Face Recognition Systems (FRS) are widely used in commercial environments, such as e-commerce and e-banking, owing to their high accuracy in real-world conditions. However, these systems are vulnerable to facial morphing attacks, which are generated by blending face color images of different subjects. This paper presents a new method for generating 3D face morphs from two bona fide point clouds. The proposed method first selects bona fide point clouds with neutral expressions. The two input point clouds are then registered using Bayesian Coherent Point Drift (BCPD) without optimization, and the geometry and color of the registered point clouds are averaged to generate a face-morphing point cloud. The proposed method generates 388 face-morphing point clouds from 200 bona fide subjects. The effectiveness of the method was demonstrated through extensive vulnerability experiments, achieving a Generalized Morphing Attack Potential (G-MAP) of 97.93%, which is superior to the existing state-of-the-art (SOTA) with a G-MAP of 81.61%.
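Once BCPD has placed the two clouds in point-wise correspondence, the morph-generation step reduces to a plain average; a minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np

def morph_point_clouds(xyz_a, rgb_a, xyz_b, rgb_b):
    """xyz_*: (n, 3) registered coordinates; rgb_*: (n, 3) per-point colors."""
    morph_xyz = 0.5 * (xyz_a + xyz_b)   # average geometry point by point
    morph_rgb = 0.5 * (rgb_a + rgb_b)   # average color point by point
    return morph_xyz, morph_rgb
```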
Real-Time Compressed Sensing for Joint Hyperspectral Image Transmission and Restoration for CubeSat
Abstract
This paper addresses the challenges associated with hyperspectral image (HSI) reconstruction from miniaturized satellites, which often suffer from stripe effects and are computationally resource-limited. We propose a Real-Time Compressed Sensing (RTCS) network designed to be lightweight and to require only relatively few training samples for efficient and robust HSI reconstruction in the presence of the stripe effect and under noisy transmission conditions. The RTCS network features a simplified architecture that reduces the required training samples and allows for easy implementation on integer-8-based encoders, facilitating rapid compressed sensing for stripe-like HSI, which matches the push-broom scanning mechanism of miniaturized satellites. This contrasts with optimization-based models, which demand high-precision floating-point operations and are therefore difficult to deploy on edge devices. Our encoder employs an integer-8-compatible linear projection for stripe-like HSI data transmission, ensuring real-time compressed sensing. Furthermore, based on a novel two-streamed architecture, an efficient HSI restoration decoder is proposed for the receiver side, allowing for edge-device reconstruction without needing a sophisticated central server. This is particularly crucial as the increasing number of miniaturized satellites would otherwise demand significant computing resources at ground stations. Extensive experiments validate the superior performance of our approach, offering new and vital capabilities for existing miniaturized satellite systems.
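As a rough illustration of an integer-8 linear encoder of this kind (the symmetric quantization scheme and shapes below are assumptions of this sketch, not the RTCS design), the sensing matrix can be quantized once on the ground so that the on-board projection uses only integer multiply-accumulates:

```python
import numpy as np

def quantize_sensing_matrix(Phi, scale=127.0):
    # One-time quantization of the float sensing matrix to int8.
    Phi_q = np.clip(np.round(Phi * scale), -128, 127).astype(np.int8)
    return Phi_q, scale

def int8_encode(Phi_q, x_stripe):
    # On-board compressed sensing of one push-broom stripe with integer MACs;
    # the receiver divides by `scale` before running its restoration decoder.
    return Phi_q.astype(np.int32) @ x_stripe.astype(np.int32)
```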
Large Language Models as In-context AI Generators for Quality-Diversity
Abstract
Quality-Diversity (QD) approaches are a promising direction for developing open-ended processes, as they can discover archives of high-quality solutions across diverse niches. While already successful in many applications, QD approaches usually rely on combining only one or two solutions to generate new candidate solutions. As observed in open-ended processes such as technological evolution, wisely combining a large diversity of solutions could lead to more innovative solutions and potentially boost the productivity of QD search. In this work, we propose to exploit the pattern-matching capabilities of generative models to enable such efficient solution combinations. We introduce In-context QD, a framework of techniques that aims to elicit the in-context capabilities of pre-trained Large Language Models (LLMs) to generate interesting solutions using the QD archive as context. Applied to a series of common QD domains, In-context QD displays promising results compared to both QD baselines and similar strategies developed for single-objective optimization. Additionally, this result holds across multiple parameter sizes and archive population sizes, as well as across domains with distinct characteristics, from BBO functions to policy search. Finally, we perform an extensive ablation that highlights the key prompt design considerations that encourage the generation of promising solutions for QD.
Empirical Analysis of the Dynamic Binary Value Problem with IOHprofiler
Authors: Diederick Vermetten, Johannes Lengler, Dimitri Rusin, Thomas Bäck, Carola Doerr
Subjects: Neural and Evolutionary Computing (cs.NE)
Abstract
Optimization problems in dynamic environments have recently been the source of several theoretical studies. One of these problems is the monotonic Dynamic Binary Value problem, which theoretically has high discriminatory power between different Genetic Algorithms. Given this theoretical foundation, we integrate several versions of this problem into the IOHprofiler benchmarking framework. Using this integration, we perform several large-scale benchmarking experiments to both recreate theoretical results on moderate-dimensional problems and investigate aspects of GA performance that have not yet been studied theoretically. Our results highlight some of the many synergies between theory and benchmarking and offer a platform through which further research into dynamic optimization problems can be performed.
Extending Cislunar Communication Network Reach Using Reconfigurable Intelligent Surfaces
Authors: Aamer Mohamed Huroon, Baris Donmez, Yu-Chih Huang, Gunes Karabulut Kurt, Li-Chun Wang
Abstract
This study introduces a novel approach to enhancing communication networks in cislunar space by leveraging Reconfigurable Intelligent Surfaces (RIS). Using the ability of RIS to dynamically control electromagnetic waves, this paper tackles the challenges of signal attenuation, directivity, and divergence in cislunar missions, caused primarily by the immense distances involved and by the fact that Earth-based station transmitters do not always face the Moon. A new optimization problem is formulated whose objective is to maximize the received signal-to-noise ratio (SNR) for Earth-to-Moon communications. We derive a closed-form solution to the problem of determining the optimal RIS phase-shift configuration based on the effective area of the RIS. Through extensive simulations, this paper demonstrates how optimal adjustments to RIS phase shifts can significantly enhance signal integrity, hinting at the substantial potential of RIS technology to revolutionize long-distance cislunar communication.
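For intuition about why a closed form can exist, the textbook co-phasing solution (which ignores the effective-area weighting used in the paper) sets each element's phase shift to cancel the phase of its cascaded channel, so all reflected paths add coherently at the receiver:

```python
import numpy as np

def cophasing_phases(h_tx_ris, h_ris_rx):
    """h_tx_ris, h_ris_rx: complex (N,) per-element channel coefficients."""
    return -np.angle(h_tx_ris * h_ris_rx)   # cancel each cascaded phase

rng = np.random.default_rng(1)
h1 = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
h2 = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
phi = cophasing_phases(h1, h2)
gain = np.abs(np.sum(h1 * np.exp(1j * phi) * h2)) ** 2
assert np.isclose(gain, np.sum(np.abs(h1 * h2)) ** 2)  # coherent maximum
```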
Optimizing Energy Efficiency of 5G RedCap Beam Management for Smart Agriculture Applications
Abstract
Beam management in 5G NR involves the transmission and reception of control signals such as Synchronization Signal Blocks (SSBs), crucial for tasks like initial access and/or channel estimation. However, this procedure consumes energy, which is particularly challenging to handle for battery-constrained nodes such as RedCap devices. Specifically, in this work we study a mid-market Internet of Things (IoT) Smart Agriculture (SmA) deployment where an Unmanned Autonomous Vehicle (UAV) acts as a base station "from the sky" (UAV-gNB) to monitor and control ground User Equipments (UEs) in the field. Then, we formalize a multi-variate optimization problem to determine the optimal beam management design for RedCap SmA devices in order to reduce the energy consumption at the UAV-gNB. Specifically, we jointly optimize the transmission power and the beamwidth at the UAV-gNB. Based on the analysis, we derive the so-called "regions of feasibility," i.e., the upper limit(s) of the beam management parameters for which RedCap Quality of Service (QoS) and energy constraints are met. We study the impact of factors like the total transmission power at the gNB, the Signal-to-Noise Ratio (SNR) threshold for successful packet decoding, the number of UEs in the region, and the misdetection probability. Simulation results demonstrate that there exists an optimal configuration for beam management to promote energy efficiency, which depends on the speed of the UEs, the beamwidth, and other network parameters.
A Generalization of Relative Entropy to Count Vectors and its Concentration Property
Abstract
We introduce a new generalization of relative entropy to non-negative vectors with sums $> 1$. We show, in a purely combinatorial setting with no probabilistic considerations, that in the presence of linear constraints defining a convex polytope, a concentration phenomenon arises for this generalized relative entropy, and we quantify the concentration precisely. We also present a probabilistic formulation and extend the concentration results to it. In addition, we provide a number of simplifications and improvements to our previous work, notably in dualizing the optimization problem, in the concentration with respect to the $\ell_{\infty}$ distance, and in the relationship to generalized KL-divergence. A number of our results apply to general compact convex sets, not necessarily polyhedral.
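For context, the standard generalized KL-divergence between arbitrary non-negative vectors, to which the quantity above is related, takes the following form (stated for reference only; the paper's own generalization may differ in detail):
$$ D(p \,\|\, q) \;=\; \sum_i p_i \log \frac{p_i}{q_i} \;-\; \sum_i p_i \;+\; \sum_i q_i, $$
which is non-negative for all non-negative $p$ and $q$, and reduces to ordinary relative entropy when both vectors sum to 1.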
Hardness and Tight Approximations of Demand Strip Packing
Abstract
We settle the pseudo-polynomial complexity of the Demand Strip Packing (DSP) problem: Given a strip of fixed width and a set of items with widths and heights, the items must be placed inside the strip with the objective of minimizing the peak height. This problem has gained significant scientific interest due to its relevance in smart grids [Deppert et al., APPROX'21; Gálvez et al., APPROX'21]. Smart grids are a modern form of electrical grid that provide opportunities for optimization. They are forecast to impact the future of energy provision significantly. Algorithms running in pseudo-polynomial time lend themselves to these applications, as the considered time intervals, such as days, are small. Moreover, such algorithms can provide superior approximation guarantees over those running in polynomial time. Consequently, they evoke scientific interest in related problems. We prove that Demand Strip Packing is strongly NP-hard for approximation ratios below $5/4$. Through this proof, we provide novel insights into the relation of packing and scheduling problems. Using these insights, we show a series of frameworks that solve both Demand Strip Packing and Parallel Task Scheduling optimally when increasing the strip's width or number of machines. Such alterations to problems are known as resource augmentation. Applications are found when penalty costs are prohibitively large. Finally, we provide a pseudo-polynomial time approximation algorithm for DSP with an approximation ratio of $(5/4+\varepsilon)$, which is nearly optimal assuming $P\neq NP$. The construction of this algorithm provides several insights into the structure of DSP solutions and uses novel techniques to restructure optimal solutions.
Mixed Supervised Graph Contrastive Learning for Recommendation
Authors: Weizhi Zhang, Liangwei Yang, Zihe Song, Henry Peng Zou, Ke Xu, Yuanjie Zhu, Philip S. Yu
Subjects: Information Retrieval (cs.IR); Machine Learning (cs.LG)
Abstract
Recommender systems (RecSys) play a vital role in online platforms, offering users personalized suggestions amidst vast information. Graph contrastive learning aims to learn from high-order collaborative filtering signals with unsupervised augmentation on the user-item bipartite graph, predominantly relying on a multi-task learning framework that involves both a pair-wise recommendation loss and a contrastive loss. This decoupled design can cause inconsistent optimization directions across the different losses, leading to longer convergence times and even sub-optimal performance. Besides, the self-supervised contrastive loss falls short in alleviating the data sparsity issue in RecSys, as it learns to differentiate users/items from different views without providing extra supervised collaborative filtering signals during augmentation. In this paper, we propose Mixed Supervised Graph Contrastive Learning for Recommendation (MixSGCL) to address these concerns. MixSGCL integrates the training of the recommendation and unsupervised contrastive losses into a single supervised contrastive learning loss, aligning the two tasks within one optimization direction. To cope with the data sparsity issue, instead of unsupervised augmentation, we further propose node-wise and edge-wise mixup to mine more direct supervised collaborative filtering signals based on existing user-item interactions. Extensive experiments on three real-world datasets demonstrate that MixSGCL surpasses state-of-the-art methods, achieving top performance in both accuracy and efficiency. This validates the effectiveness of MixSGCL's coupled design for supervised graph contrastive learning.
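As a rough sketch of what node-wise mixup over observed interactions can look like (the Beta-sampled coefficient and random pairing below are illustrative assumptions, not necessarily MixSGCL's exact formulation):

```python
import torch

def nodewise_mixup(user_emb, item_emb, alpha=0.2):
    """user_emb, item_emb: (batch, d) embeddings of observed user-item pairs."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(user_emb.size(0))
    # Mix each interacting pair with a second, randomly drawn pair to
    # synthesize extra supervised positives from existing interactions.
    mixed_user = lam * user_emb + (1.0 - lam) * user_emb[perm]
    mixed_item = lam * item_emb + (1.0 - lam) * item_emb[perm]
    return mixed_user, mixed_item
```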
Learning deep Koopman operators with convex stability constraints
Abstract
In this paper, we present a novel sufficient condition for the stability of discrete-time linear systems that can be represented as a set of piecewise linear constraints, which makes it suitable for quadratic programming optimization problems. More specifically, we tackle the problem of imposing asymptotic stability on a Koopman matrix learned from data during iterative gradient descent optimization. We show that this sufficient condition can be decoupled by rows of the system matrix, and we propose a control barrier function-based projected gradient descent to enforce gradual evolution towards the stability set by running an optimization-in-the-loop during the iterative learning process. We compare the performance of our algorithm with two other recent approaches in the literature and show that we come close to state-of-the-art performance while providing the added flexibility of allowing the optimization problem to be further customized for specific applications.
Seed Selection in the Heterogeneous Moran Process
Authors: Petros Petsinis, Andreas Pavlogiannis, Josef Tkadlec, Panagiotis Karras
Subjects: Data Structures and Algorithms (cs.DS); Computational Complexity (cs.CC); Social and Information Networks (cs.SI); Populations and Evolution (q-bio.PE)
Abstract
The Moran process is a classic stochastic process that models the rise and takeover of novel traits in network-structured populations. In biological terms, a set of mutants, each with fitness $m\in(0,\infty)$, invades a population of residents with fitness $1$. Each agent reproduces at a rate proportional to its fitness, and each offspring replaces a random network neighbor. The process ends when the mutants either fixate (take over the whole population) or go extinct. The fixation probability measures the success of the invasion. To account for environmental heterogeneity, we study a generalization of the standard process, called the Heterogeneous Moran process. Here, the fitness of each agent is determined both by its type (resident/mutant) and by the node it occupies. We study the natural optimization problem of seed selection: given a budget $k$, which $k$ agents should initiate the mutant invasion to maximize the fixation probability? We show that the problem is strongly inapproximable: it is $\mathbf{NP}$-hard to distinguish between maximum fixation probability 0 and 1. We then focus on mutant-biased networks, where each node exhibits at least as large a mutant fitness as resident fitness. We show that the problem remains $\mathbf{NP}$-hard, but the fixation probability becomes submodular, and thus the optimization problem admits a greedy $(1-1/e)$-approximation. An experimental evaluation of the greedy algorithm along with various heuristics on real-world data sets corroborates our results.
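The $(1-1/e)$ guarantee quoted above comes from the classic greedy rule for monotone submodular maximization; a generic sketch follows, in which fixation_prob is any user-supplied estimator (e.g., a Monte Carlo simulator of the process), an assumption of this illustration:

```python
def greedy_seed_selection(nodes, k, fixation_prob):
    """fixation_prob(seed_set) -> estimated fixation probability in [0, 1]."""
    seeds = frozenset()
    for _ in range(k):
        # Add the node yielding the largest marginal gain in fixation probability.
        best = max((n for n in nodes if n not in seeds),
                   key=lambda n: fixation_prob(seeds | {n}))
        seeds = seeds | {best}
    return seeds
```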
Abstract
Recent work has developed optimization procedures to find token sequences, called adversarial triggers, which can elicit unsafe responses from aligned language models. These triggers are believed to be universally transferable, i.e., a trigger optimized on one model can jailbreak other models. In this paper, we concretely show that such adversarial triggers are not universal. We extensively investigate trigger transfer amongst 13 open models and observe inconsistent transfer. Our experiments further reveal a significant difference in robustness to adversarial triggers between models Aligned by Preference Optimization (APO) and models Aligned by Fine-Tuning (AFT). We find that APO models are extremely hard to jailbreak even when the trigger is optimized directly on the model. On the other hand, while AFT models may appear safe on the surface, exhibiting refusals to a range of unsafe instructions, we show that they are highly susceptible to adversarial triggers. Lastly, we observe that most triggers optimized on AFT models also generalize to new unsafe instructions from five diverse domains, further emphasizing their vulnerability. Overall, our work highlights the need for more comprehensive safety evaluations for aligned language models.
Keyword: deep learning
Quantifying Social Presence in Mixed Reality: A Contemporary Review of Techniques and Innovations
Abstract
This literature review investigates the transformative potential of mixed reality (MR) technology, where we explore the intersection of contemporary technological advancements, modern deep learning recommendation systems, and social psychology frameworks. This interdisciplinary study informs the understanding of MR's role in improving social presence, catalyzing novel social interactions, and enhancing the quality of interpersonal communication in the real world. We also discuss the challenges and barriers blocking the widespread adoption of social networking in MR, such as device constraints, privacy and accessibility concerns, and social norms. Through carefully structured, closed-environment experiments with diverse participants of varying levels of digital literacy, we measure the differences in social dynamics, frequency, quality, and duration of interactions, and levels of social anxiety between MR-enhanced, mobile-enhanced, and control condition participants.
Feature Distribution Shift Mitigation with Contrastive Pretraining for Intrusion Detection
Authors: Weixing Wang, Haojin Yang, Christoph Meinel, Hasan Yagiz Özkan, Cristian Bermudez Serna, Carmen Mas-Machuca
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Networking and Internet Architecture (cs.NI)
Abstract
In recent years, there has been growing interest in using Machine Learning (ML), especially Deep Learning (DL), to solve Network Intrusion Detection (NID) problems. However, the feature distribution shift problem remains a difficulty, because changes in feature distributions over time negatively impact model performance. As one promising solution, model pretraining has emerged as a novel training paradigm that brings robustness against feature distribution shift and has proven successful in Computer Vision (CV) and Natural Language Processing (NLP). To verify whether this paradigm is beneficial for the NID problem, we propose SwapCon, an ML model in the context of NID, which compresses shift-invariant feature information during the pretraining stage and refines it during the finetuning stage. We exemplify the evidence of feature distribution shift using the Kyoto2006+ dataset. We demonstrate how pretraining a model of the proper size can increase robustness against feature distribution shifts by over 8%. Moreover, we show how an adequate numerical embedding strategy also enhances the performance of pretrained models. Further experiments show that the proposed SwapCon model also outperforms eXtreme Gradient Boosting (XGBoost) and K-Nearest Neighbor (KNN) based models by a large margin.
Large-Scale Multipurpose Benchmark Datasets For Assessing Data-Driven Deep Learning Approaches For Water Distribution Networks
Authors: Andres Tello, Huy Truong, Alexander Lazovik, Victoria Degeler
Abstract
Currently, the number of common benchmark datasets that researchers can use straight away for assessing data-driven deep learning approaches is very limited. Most studies provide data as configuration files, and it is still up to each practitioner to follow a particular data generation method and run computationally intensive simulations to obtain usable data for model training and evaluation. In this work, we provide a collection of datasets that includes several small and medium-size publicly available Water Distribution Networks (WDNs), including Anytown, Modena, Balerma, C-Town, D-Town, L-Town, Ky1, Ky6, Ky8, and Ky13. In total, 1,394,400 hours of WDN data under normal operating conditions are made available to the community.
ML-based identification of the interface regions for coupling local and nonlocal models
Authors: Noujoud Nader, Patrick Diehl, Marta D'Elia, Christian Glusa, Serge Prudhomme
Abstract
Local-nonlocal coupling approaches combine the computational efficiency of local models and the accuracy of nonlocal models. However, the coupling process is challenging, requiring expertise to identify the interface between local and nonlocal regions. This study introduces a machine learning-based approach to automatically detect the regions in which the local and nonlocal models should be used in a coupling approach. This identification process takes the loading functions as input and provides as output the selected model at the grid points. Training is based on datasets of loading functions for which reference coupling configurations are computed using accurate coupled solutions, where accuracy is measured in terms of the relative error between the solution to the coupling approach and the solution to the nonlocal model. We study two approaches that differ from one another in terms of the data structure. The first, referred to as the full-domain input data approach, inputs the full load vector and outputs a full label vector; in this case, the classification is carried out globally. The second is a window-based approach, in which loads are preprocessed and partitioned into windows and the problem is formulated as node-wise classification, with the central point of each window treated individually. The classification problems are solved via deep learning algorithms based on convolutional neural networks. The performance of these approaches is studied on one-dimensional numerical examples using F1-scores and accuracy metrics. In particular, it is shown that the windowing approach provides promising results, achieving an accuracy of 0.96 and an F1-score of 0.97. These results underscore the potential of the approach to automate coupling processes, leading to more accurate and computationally efficient solutions for material science applications.
Uncertainty in latent representations of variational autoencoders optimized for visual tasks
Authors: Josefina Catoni, Enzo Ferrante, Diego H. Milone, Rodrigo Echeveste
Abstract
Deep learning methods are becoming increasingly instrumental as modeling tools in computational neuroscience, employing optimality principles to build bridges between neural responses and perception or behavior. Developing models that adequately represent uncertainty is, however, challenging for deep learning methods, which often suffer from calibration problems. This constitutes a difficulty in particular when modeling cortical circuits in terms of Bayesian inference, beyond single point estimates such as the posterior mean or the maximum a posteriori. In this work, we systematically studied uncertainty in the latent representations of variational auto-encoders (VAEs), both in a perceptual task from natural images and in two other canonical tasks of computer vision, finding a poor alignment between uncertainty and informativeness or ambiguities in the images. We next showed how a novel approach, which we call explaining-away variational auto-encoders (EA-VAEs), fixes these issues, producing meaningful reports of uncertainty in a variety of scenarios, including interpolation, image corruption, and even out-of-distribution detection. We show that EA-VAEs may prove useful both as models of perception in computational neuroscience and as inference tools in computer vision.
NMBEnet: Efficient Near-field mmWave Beam Training for Multiuser OFDM Systems Using Sub-6 GHz Pilots
Authors: Wang Liu, Cunhua Pan, Hong Ren, Cheng-Xiang Wang, Jiangzhou Wang, Xiaohu You
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
Combining millimetre-wave (mmWave) communications with an extremely large-scale antenna array (ELAA) presents a promising avenue for meeting the spectral efficiency demands of future sixth-generation (6G) mobile communications. However, beam training for mmWave ELAA systems is challenged by excessive pilot overhead as well as insufficient accuracy, since the huge near-field codebook has to be accounted for. In this paper, inspired by the similarity between far-field sub-6 GHz channels and near-field mmWave channels, we propose to leverage sub-6 GHz uplink pilot signals to directly estimate the optimal near-field mmWave codeword, which reduces pilot overhead and bypasses channel estimation. Moreover, we adopt deep learning to perform this dual mapping (sub-6 GHz to mmWave, far-field to near-field) and design a novel neural network structure, NMBEnet, to enhance the precision of beam training. Specifically, in orthogonal frequency division multiplexing (OFDM) communication scenarios with high user density, correlations arise both between signals from different users and between signals from different subcarriers. Accordingly, the convolutional neural network (CNN) module and graph neural network (GNN) module included in the proposed NMBEnet can leverage these two correlations to further enhance the precision of beam training.
An MRP Formulation for Supervised Learning: Generalized Temporal Difference Learning Models
Authors: Yangchen Pan, Junfeng Wen, Chenjun Xiao, Philip Torr
Abstract
In traditional statistical learning, data points are usually assumed to be independently and identically distributed (i.i.d.) according to an unknown probability distribution. This paper presents a contrasting viewpoint, perceiving data points as interconnected and employing a Markov reward process (MRP) for data modeling. We reformulate typical supervised learning as an on-policy policy evaluation problem within reinforcement learning (RL), introducing a generalized temporal difference (TD) learning algorithm as a resolution. Theoretically, our analysis draws connections between the solutions of linear TD learning and ordinary least squares (OLS). We also show that under specific conditions, particularly when noises are correlated, the TD solution proves to be a more effective estimator than OLS. Furthermore, we establish the convergence of our generalized TD algorithms under linear function approximation. Empirical studies verify our theoretical results, examine the key design choices of our TD algorithm, and show practical utility across various datasets, encompassing tasks such as regression and image classification with deep learning.
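As a hedged sketch of this idea for plain linear regression, the snippet below treats consecutive data points as successive MRP states and updates the weights with a semi-gradient TD(0) rule; the telescoping reward construction is one simple choice made for this illustration and may differ from the paper's exact formulation:

```python
import numpy as np

def linear_td_regression(X, y, gamma=0.9, lr=0.01, epochs=50):
    """X: (m, d) features treated as an ordered trajectory; y: (m,) targets."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for t in range(len(X) - 1):
            v_t, v_next = X[t] @ w, X[t + 1] @ w
            reward = y[t] - gamma * y[t + 1]      # telescoping: value(s_t) = y_t
            td_error = reward + gamma * v_next - v_t
            w += lr * td_error * X[t]             # semi-gradient TD(0) update
    return w
```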
Research on OPF control of three-phase four-wire low-voltage distribution network considering uncertainty
Abstract
As power systems become more complex and uncertain, low-voltage distribution networks face numerous challenges, including three-phase imbalances caused by asymmetrical loads and distributed energy resources. To address these issues, we propose a robust stochastic optimization (RSO) based optimal power flow (OPF) control method for three-phase four-wire low-voltage distribution networks that accounts for uncertainty. Using historical data and deep learning classification methods, the proposed method simulates optimal system behaviour without requiring communication infrastructure. The simulation results verify that the proposed method effectively controls voltage and current amplitudes while minimizing operational cost and keeping three-phase imbalance within acceptable limits. The proposed method shows promise for managing uncertainties and optimizing performance in low-voltage distribution networks.
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Authors: Liang Qu, Cunze Wang, Yuhui Shi
Subjects: Machine Learning (cs.LG); Image and Video Processing (eess.IV)
Abstract
The application of deep learning techniques to medical problems has garnered widespread research interest in recent years, for example in applying convolutional neural networks to medical image classification tasks. However, data in the medical field is often highly private, preventing different hospitals from sharing data to train an accurate model. Federated learning, a privacy-preserving machine learning architecture, has shown promising performance in balancing data privacy and model utility by keeping private data on the clients' side and using a central server to coordinate a set of clients for model training by aggregating their uploaded model parameters. Yet, this architecture heavily relies on a trusted third-party server, which is difficult to achieve in practice. Swarm learning, a specialized decentralized federated learning architecture that does not require a central server, utilizes blockchain technology to enable direct parameter exchanges between clients. However, the mining of blocks requires significant computational resources, limiting its scalability. To address this issue, this paper integrates the brain storm optimization (BSO) algorithm into the swarm learning framework, yielding an approach named BSO-SL. This approach clusters similar clients into different groups based on their model distributions. Additionally, leveraging the architecture of BSO, clients are given a probability of engaging in collaborative learning both within their cluster and with clients outside it, preventing the model from converging to local optima. The proposed method has been validated on a real-world diabetic retinopathy image classification dataset, and the experimental results demonstrate the effectiveness of the proposed approach.
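The clustering-plus-probabilistic-collaboration step can be sketched as follows (NumPy/scikit-learn). The use of k-means over flattened parameters and the probability p_in are illustrative assumptions; the paper's BSO clustering may differ.

```python
# Hedged sketch: cluster clients by flattened model parameters, then let each
# client collaborate within its cluster with probability p_in, else outside.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
params = rng.normal(size=(20, 100))          # 20 clients' flattened models
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(params)

def pick_partner(i, p_in=0.8):
    same = [j for j in range(len(params)) if labels[j] == labels[i] and j != i]
    other = [j for j in range(len(params)) if labels[j] != labels[i]]
    pool = same if (rng.random() < p_in and same) else other
    return int(rng.choice(pool))

j = pick_partner(0)
avg = 0.5 * (params[0] + params[j])          # pairwise parameter exchange
```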
Security Analysis of WiFi-based Sensing Systems: Threats from Perturbation Attacks
Abstract
Deep learning technologies are pivotal in enhancing the performance of WiFi-based wireless sensing systems. However, they are inherently vulnerable to adversarial perturbation attacks, and regrettably this security issue has received little serious attention within the WiFi sensing community. In this paper, we elaborate such an attack, called WiIntruder, which distinguishes itself by universality, robustness, and stealthiness, and which serves as a catalyst for assessing the security of existing WiFi-based sensing systems. This attack encompasses the following salient features: (1) maximizing transferability by differentiating user-state-specific feature spaces across sensing models, leading to a universally effective perturbation attack applicable to common applications; (2) addressing perturbation signal distortion caused by device synchronization and wireless propagation by optimizing the critical parameters with a heuristic particle-swarm-driven perturbation generation algorithm; and (3) enhancing attack pattern diversity and stealthiness through random switching of perturbation surrogates generated by a generative adversarial network. Extensive experimental results confirm the practical threats of perturbation attacks to common WiFi-based services, including user authentication and respiratory monitoring.
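A toy sketch of the particle-swarm search in feature (2): the swarm optimizes perturbation parameters against a surrogate objective. The objective below is a placeholder, since the paper's transferability-plus-distortion criterion is not spelled out here, and all swarm hyperparameters are assumptions.

```python
# Standard particle swarm optimization over perturbation parameters.
import numpy as np

rng = np.random.default_rng(2)

def attack_objective(theta):
    # Placeholder surrogate: rewards perturbations near an assumed optimum.
    return -np.sum((theta - 0.3) ** 2)

n, dim = 30, 8
pos = rng.uniform(-1, 1, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([attack_objective(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([attack_objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(np.round(gbest, 2))   # converges near the assumed optimum 0.3
```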
A Survey of Deep Long-Tail Classification Advancements
Authors: Charika de Alvis (The University of Sydney, Australia), Suranga Seneviratne (The University of Sydney, Australia)
Abstract
Many data distributions in the real world are far from uniform. Instead, skewed and long-tailed distributions of various kinds are commonly observed. This poses an interesting problem for machine learning, where most algorithms assume, or work best with, uniformly distributed data. The problem is further exacerbated by current state-of-the-art deep learning models requiring large volumes of training data. As such, learning from imbalanced data remains a challenging research problem, and one that must be solved as we move towards more real-world applications of deep learning. In the context of class imbalance, state-of-the-art (SOTA) accuracies on standard benchmark classification datasets typically fall below 75%, even for comparatively less challenging datasets such as CIFAR100. Nonetheless, there has been progress in this niche area of deep learning. To this end, in this survey, we provide a taxonomy of the methods proposed for long-tail classification in the last few years, organized under a single mathematical framework. We also discuss standard performance metrics, convergence studies, feature distribution, and classifier analysis. Finally, we provide a quantitative comparison of the performance of different SOTA methods and conclude the survey by discussing the remaining challenges and future research directions.
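As one representative technique from this literature (chosen for illustration, not tied to any specific surveyed method), class-balanced reweighting via the effective number of samples of Cui et al. (2019) assigns class c a weight proportional to (1 - beta) / (1 - beta^{n_c}):

```python
# Class-balanced loss weights from per-class sample counts.
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    counts = np.asarray(counts, dtype=float)
    eff_num = (1.0 - beta ** counts) / (1.0 - beta)  # effective sample count
    w = 1.0 / eff_num
    return w * len(counts) / w.sum()                 # normalize to mean 1

print(np.round(class_balanced_weights([5000, 500, 50, 5]), 3))
# head classes get small weights, tail classes large ones
```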
VulEval: Towards Repository-Level Evaluation of Software Vulnerability Detection
Abstract
Deep Learning (DL)-based methods have proven to be effective for software vulnerability detection, with the potential for substantial productivity enhancement. Current methods mainly focus on detecting single functions (i.e., intra-procedural vulnerabilities), ignoring the more complex inter-procedural vulnerability detection scenarios encountered in practice. For example, developers routinely engage in program analysis to detect vulnerabilities that span multiple functions within repositories. In addition, the widely used benchmark datasets generally contain only intra-procedural vulnerabilities, leaving the assessment of inter-procedural vulnerability detection capabilities unexplored. To mitigate these issues, we propose a repository-level evaluation system, named \textbf{VulEval}, aimed at evaluating the detection performance on inter- and intra-procedural vulnerabilities simultaneously. Specifically, VulEval consists of three interconnected evaluation tasks: \textbf{(1) Function-Level Vulnerability Detection}, detecting intra-procedural vulnerabilities given a code snippet; \textbf{(2) Vulnerability-Related Dependency Prediction}, retrieving the most relevant dependencies from call graphs to provide developers with explanations of the vulnerabilities; and \textbf{(3) Repository-Level Vulnerability Detection}, detecting inter-procedural vulnerabilities by combining them with the dependencies identified in the second task. VulEval also includes a large-scale dataset, with a total of 4,196 CVE entries, 232,239 functions, and 4,699 corresponding repository-level source files in the C/C++ programming languages. Our analysis highlights the current progress and future directions for software vulnerability detection.
MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition
Abstract
Emotion decoding using Electroencephalography (EEG)-based affective brain-computer interfaces (aBCIs) represents a significant area within the field of affective computing. In the present study, we propose a novel non-deep transfer learning method, termed Manifold-based Domain adaptation with Dynamic Distribution (MDDD). The proposed MDDD includes four main modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning. The data undergo a transformation onto an optimal Grassmann manifold space, enabling dynamic alignment of the source and target domains. This process prioritizes marginal and conditional distributions according to their significance, ensuring enhanced adaptation efficiency across various types of data. In the classifier learning module, the principle of structural risk minimization is applied to develop robust classification models. This is complemented by dynamic distribution alignment, which refines the classifier iteratively. Additionally, the ensemble learning module aggregates the classifiers obtained at different stages of the optimization process, leveraging their diversity to enhance overall prediction accuracy. The experimental results indicate that MDDD outperforms traditional non-deep learning methods, achieving an average improvement of 3.54%, and is comparable to deep learning methods. This suggests that MDDD could be a promising method for enhancing the utility and applicability of aBCIs in real-world scenarios.
An Electromagnetism-Inspired Method for Estimating In-Grasp Torque from Visuotactile Sensors
Abstract
Tactile sensing has become a popular sensing modality for robot manipulators, owing to the promise of providing robots with the ability to measure the rich contact information transmitted through their sense of touch. Among the diverse range of information accessible from tactile sensors, the torques transmitted from the grasped object to the fingers through extrinsic environmental contact may be particularly important for tasks such as object insertion. However, tactile torque estimation has received relatively little attention compared to other sensing modalities, such as force, texture, or slip identification. In this work, we introduce the notion of the Tactile Dipole Moment, which we use to estimate tilt torques from gel-based visuotactile sensors. This method does not rely on deep learning or on sensor-specific mechanical or optical modeling; instead, it takes inspiration from electromagnetism to analyze the vector field produced by 2D marker displacements. Despite the simplicity of our technique, we demonstrate its ability to provide accurate torque readings on two different tactile sensors and three object geometries, and highlight its practicality for the task of USB stick insertion with a compliant robot arm. These results suggest that simple analytical calculations based on dipole moments can sufficiently extract physical quantities from visuotactile sensors.
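The abstract does not give the exact Tactile Dipole Moment formula, but first moments of a 2D marker displacement field convey the idea; the curl-like and divergence-like moments below are illustrative proxies, not the paper's definition.

```python
# First moments of a marker displacement field as torque-related statistics.
import numpy as np

def field_moments(positions, displacements):
    """positions, displacements: (N, 2) arrays of marker data."""
    r = positions - positions.mean(axis=0)            # center the marker grid
    twist = np.mean(r[:, 0] * displacements[:, 1]
                    - r[:, 1] * displacements[:, 0])  # ~ torque about normal
    tilt = np.mean(np.sum(r * displacements, axis=1)) # ~ in-plane loading
    return twist, tilt

pos = np.stack(np.meshgrid(np.arange(5.), np.arange(5.)), -1).reshape(-1, 2)
disp = 0.01 * np.stack([-(pos[:, 1] - 2), pos[:, 0] - 2], axis=1)  # pure twist
print(field_moments(pos, disp))  # twist moment nonzero, tilt ~ 0
```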
A Real-time Evaluation Framework for Pedestrian's Potential Risk at Non-Signalized Intersections Based on Predicted Post-Encroachment Time
Abstract
Addressing pedestrian safety at intersections is one of the paramount concerns in transportation research, driven by the urgency of reducing traffic-related injuries and fatalities. With advances in computer vision technologies and predictive models, developing real-time proactive protection systems is increasingly recognized as vital to improving pedestrian safety at intersections. The core of these protection systems lies in the prediction-based evaluation of pedestrians' potential risks, which plays a significant role in preventing accidents. The major challenges in current prediction-based potential risk evaluation research can be summarized in three aspects: the inadequate progress in creating a real-time framework for evaluating pedestrians' potential risks, the absence of accurate and explainable safety indicators that can represent the potential risk, and the lack of evaluation criteria tailored to each category of pedestrians. To address these challenges, in this study a framework based on computer vision technologies and predictive models is developed to evaluate the potential risk of pedestrians in real time. Integral to this framework is a novel surrogate safety measure, the Predicted Post-Encroachment Time (P-PET), derived from deep learning models capable of predicting the arrival times of pedestrians and vehicles at intersections. To further improve the effectiveness and reliability of pedestrian risk evaluation, we classify pedestrians into distinct categories and apply specific evaluation criteria to each group. The results demonstrate the framework's ability to effectively identify potential risks through the use of P-PET, indicating its feasibility for real-time applications and its improved risk evaluation performance across different categories of pedestrians.
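The measure itself is simple to state: P-PET is the gap between the predicted arrival times of a pedestrian and a vehicle at the conflict point. A minimal sketch (the 2-second risk threshold below is an assumption, not the paper's criterion):

```python
# Predicted Post-Encroachment Time from model-predicted arrival times.
def predicted_pet(t_pedestrian: float, t_vehicle: float) -> float:
    """Gap between predicted arrival times at the conflict zone (seconds)."""
    return abs(t_vehicle - t_pedestrian)

def is_high_risk(t_pedestrian, t_vehicle, threshold_s=2.0):
    # Small predicted gaps mean the pedestrian and vehicle nearly co-occur.
    return predicted_pet(t_pedestrian, t_vehicle) < threshold_s

print(predicted_pet(3.2, 4.1), is_high_risk(3.2, 4.1))  # 0.9 True
```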
Building-PCC: Building Point Cloud Completion Benchmarks
Authors: Weixiao Gao, Ravi Peters, Jantien Stoter
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
With the rapid advancement of 3D sensing technologies, obtaining 3D shape information of objects has become increasingly convenient. Lidar technology, with its capability to accurately capture the 3D information of objects at long distances, has been widely applied to the collection of 3D data in urban scenes. However, the collected point cloud data often exhibit incompleteness due to factors such as occlusion, signal absorption, and specular reflection. This paper explores the application of point cloud completion technologies to these incomplete data and establishes a new real-world benchmark, the Building-PCC dataset, to evaluate the performance of existing deep learning methods on the task of urban building point cloud completion. Through a comprehensive evaluation of different methods, we analyze the key challenges in building point cloud completion, aiming to promote innovation in the field of 3D geoinformation applications. Our source code is available at https://github.com/tudelft3d/Building-PCC-Building-Point-Cloud-Completion-Benchmarks.git.
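The abstract does not name its evaluation metrics, but a common choice for completion benchmarks is the Chamfer distance between completed and ground-truth clouds; a NumPy version for reference:

```python
# Symmetric Chamfer distance between two point sets (brute-force, for clarity).
import numpy as np

def chamfer_distance(a, b):
    """a: (N, 3) predicted points, b: (M, 3) ground-truth points."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.random.rand(256, 3)
gt = np.random.rand(300, 3)
print(chamfer_distance(pred, gt))
```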
Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision
Authors: Mohammad Reza Hosseinzadeh Taher, Michael B. Gotway, Jianming Liang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Humans effortlessly interpret images by parsing them into part-whole hierarchies; deep learning models excel at learning multi-level feature spaces, but they often lack explicit coding of part-whole relations, a prominent property of medical imaging. To overcome this limitation, we introduce Adam-v2, a new self-supervised learning framework extending Adam [79] by explicitly incorporating part-whole hierarchies into its learning objectives through three key branches: (1) Localizability, acquiring discriminative representations to distinguish different anatomical patterns; (2) Composability, learning each anatomical structure in a parts-to-whole manner; and (3) Decomposability, comprehending each anatomical structure in a whole-to-parts manner. Experimental results across 10 tasks, compared to 11 baselines in zero-shot, few-shot transfer, and full fine-tuning settings, showcase Adam-v2's superior performance over large-scale medical models and existing SSL methods across diverse downstream tasks. The higher generality and robustness of Adam-v2's representations originate from its explicit construction of hierarchies for distinct anatomical structures from unlabeled medical images. Adam-v2 preserves a semantic balance of anatomical diversity and harmony in its embedding, yielding representations that are both generic and semantically meaningful, a property overlooked in existing SSL methods. All code and pretrained models are available at https://github.com/JLiangLab/Eden.
Deep Learning for Accelerated and Robust MRI Reconstruction: a Review
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction. It focuses on DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. These include end-to-end neural networks, pre-trained networks, generative models, and self-supervised methods. The paper also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling subtle bias. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images
Authors: Orazio Pontorno (1), Luca Guarnera (1), Sebastiano Battiato (1) ((1) University of Catania)
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics. The scientific community is working to develop approaches that can discriminate the origin of digital images (real or AI-generated). However, these methodologies face the challenge of generalization, that is, the ability to discern the nature of an image even when it is generated by an architecture not seen during training, which usually leads to a drop in performance. In this context, we propose a novel approach based on three blocks called Base Models, each of which is responsible for extracting the discriminative features of a specific image class (Diffusion Model-generated, GAN-generated, or real), having been trained on a deliberately unbalanced dataset. The features extracted from each block are then concatenated and processed to discriminate the origin of the input image. Experimental results show that this approach not only demonstrates robustness to JPEG compression but also outperforms state-of-the-art methods in several generalization tests. Code, models and dataset are available at https://github.com/opontorno/block-based_deepfake-detection.
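Structurally, the three-Base-Models idea can be sketched as below (PyTorch): one feature extractor per origin class, with concatenated features feeding a final classifier. The ResNet-18 backbone and all sizes are assumptions, not the paper's configuration.

```python
# Three per-class feature extractors with concatenated features and a head.
import torch
import torch.nn as nn
from torchvision import models

def make_base_model(feat_dim=128):
    m = models.resnet18(weights=None)
    m.fc = nn.Linear(m.fc.in_features, feat_dim)
    return m

class DeepFeatureXLike(nn.Module):
    def __init__(self, feat_dim=128, n_classes=3):
        super().__init__()
        # One extractor per origin: diffusion-generated / GAN-generated / real
        self.bases = nn.ModuleList([make_base_model(feat_dim) for _ in range(3)])
        self.head = nn.Linear(3 * feat_dim, n_classes)

    def forward(self, x):
        feats = torch.cat([base(x) for base in self.bases], dim=-1)
        return self.head(feats)

net = DeepFeatureXLike()
print(net(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 3])
```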
No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement
Authors: Mateusz Klimaszewski, Piotr Andruszkiewicz, Alexandra Birch
Abstract
Modular deep learning is the state-of-the-art solution for lifting the curse of multilinguality, preventing negative interference and enabling cross-lingual performance in Multilingual Pre-trained Language Models. However, a trade-off of this approach is the reduction in positive transfer from closely related languages. In response, we introduce a novel method called language arithmetic, which enables training-free post-processing to address this limitation. Inspired by the task arithmetic framework, we apply learning via addition to the language adapters, transitioning the framework from a multi-task to a multilingual setup. The effectiveness of the proposed solution is demonstrated on three downstream tasks in a MAD-X-based set of cross-lingual schemes, where it acts as a post-processing procedure. Language arithmetic consistently improves the baselines, with significant gains in the most challenging cases of zero-shot and low-resource applications. Our code and models are available at https://github.com/mklimasz/language-arithmetic .
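In its simplest form, "learning via addition" amounts to element-wise arithmetic on adapter weights. The sketch below assumes adapters stored as PyTorch state dicts and a single mixing coefficient lam; the paper's exact combination rule may differ.

```python
# Weighted element-wise addition of two language adapters' parameters.
import torch

def language_arithmetic(adapter_a, adapter_b, lam=0.5):
    """Combine two adapter state dicts by element-wise weighted addition."""
    return {k: adapter_a[k] + lam * adapter_b[k] for k in adapter_a}

adapter_a = {"down.weight": torch.randn(16, 768), "up.weight": torch.randn(768, 16)}
adapter_b = {k: torch.randn_like(v) for k, v in adapter_a.items()}
combined = language_arithmetic(adapter_a, adapter_b, lam=0.3)
# `combined` can then be loaded in place of a target language's adapter.
```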
Bi-Mamba4TS: Bidirectional Mamba for Time Series Forecasting
Authors: Aobo Liang, Xingguo Jiang, Yan Sun, Chang Lu
Abstract
Long-term time series forecasting (LTSF) provides longer-range insight into future trends and patterns. In recent years, deep learning models, especially Transformers, have achieved advanced performance in LTSF tasks. However, the quadratic complexity of Transformers raises the challenge of balancing computational efficiency and prediction performance. Recently, a new state space model (SSM) named Mamba was proposed. With its selective capability on input data and a hardware-aware parallel computing algorithm, Mamba can capture long-term dependencies well while maintaining linear computational complexity. Mamba has shown great ability for long sequence modeling and is a potential competitor to Transformer-based models in LTSF. In this paper, we propose Bi-Mamba4TS, a bidirectional Mamba for time series forecasting. To address the sparsity of time series semantics, we adopt the patching technique to enrich local information while capturing the evolutionary patterns of time series at a finer granularity. To select the more appropriate modeling method based on the characteristics of the dataset, our model unifies the channel-independent and channel-mixing tokenization strategies and uses a series-relation-aware decider to control the strategy selection process. Extensive experiments on seven real-world datasets show that our model achieves more accurate predictions compared with state-of-the-art methods.
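The patching step mentioned above is standard and easy to show concretely; in the sketch below (PyTorch), the patch length and stride are assumed values, not the paper's settings.

```python
# Slice a multivariate series into overlapping patches that serve as tokens.
import torch

def patchify(series, patch_len=16, stride=8):
    """series: (batch, length, channels) -> (batch, channels, n_patches, patch_len)."""
    x = series.transpose(1, 2)                 # (batch, channels, length)
    return x.unfold(-1, patch_len, stride)     # patch tokens per channel

x = torch.randn(4, 96, 7)      # e.g., 96 time steps, 7 variables
tokens = patchify(x)
print(tokens.shape)            # torch.Size([4, 7, 11, 16])
```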
Toward Physics-Aware Deep Learning Architectures for LiDAR Intensity Simulation
Abstract
Autonomous vehicles (AVs) heavily rely on LiDAR perception for environment understanding and navigation. LiDAR intensity provides valuable information about the reflected laser signals and plays a crucial role in enhancing the perception capabilities of AVs. However, accurately simulating LiDAR intensity remains a challenge due to the unavailability of material properties of the objects in the environment and the complex interactions between the laser beam and the environment. The proposed method aims to improve the accuracy of intensity simulation by incorporating physics-based modalities within the deep learning framework. One of the key entities capturing the interaction between the laser beam and the objects is the angle of incidence. In this work we demonstrate that adding the LiDAR incidence angle as a separate input to the deep neural networks significantly enhances the results. We present a comparative study between two prominent deep learning architectures: U-NET, a convolutional neural network (CNN), and Pix2Pix, a generative adversarial network (GAN). We implemented both architectures for the intensity prediction task and used the SemanticKITTI and VoxelScape datasets for experiments. The comparative analysis reveals that both architectures benefit from the incidence angle as an additional input. Moreover, the Pix2Pix architecture outperforms U-NET, especially when the incidence angle is incorporated.
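The incidence angle input is straightforward to compute when surface normals are available (e.g., from the simulator or mesh, which is an assumption here); appending it as an extra channel is the reported modification, while everything else below is illustrative.

```python
# Incidence angle between LiDAR rays and surface normals, as an extra channel.
import numpy as np

def incidence_angles(ray_dirs, normals):
    """Angle (radians) between rays and normals; both (N, 3), unit-norm."""
    cosines = np.abs(np.sum(ray_dirs * normals, axis=1)).clip(0.0, 1.0)
    return np.arccos(cosines)

rays = np.array([[0., 0., -1.], [0., 0., -1.]])
normals = np.array([[0., 0., 1.], [0., 0.7071, 0.7071]])
angles = incidence_angles(rays, normals)
print(np.degrees(angles))          # ~[0, 45] degrees
# features = np.concatenate([base_channels, angles[:, None]], axis=1)
```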
Employing Two-Dimensional Word Embedding for Difficult Tabular Data Stream Classification
Abstract
Rapid technological advances are inherently linked to an increased amount of data, a substantial portion of which can be interpreted as data streams, which can exhibit concept drift and high imbalance ratios. Consequently, developing new approaches to classifying difficult data streams is a rapidly growing research area. At the same time, the proliferation of deep learning and transfer learning, as well as the success of convolutional neural networks in computer vision tasks, have contributed to the emergence of a new research trend, Multi-Dimensional Encoding (MDE), which focuses on transforming tabular data into the homogeneous form of a discrete digital signal. This paper proposes Streaming Super Tabular Machine Learning (SSTML), thereby exploring for the first time the potential of MDE for the difficult task of data stream classification. SSTML encodes consecutive data chunks into an image representation using the STML algorithm and then performs a single ResNet-18 training epoch. Experiments conducted on synthetic and real data streams demonstrate the ability of SSTML to achieve classification quality statistically significantly superior to state-of-the-art algorithms while maintaining comparable processing time.
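A hedged skeleton of the streaming loop: encode each chunk as images, then run one ResNet-18 training epoch per chunk. The STML encoding itself is the paper's contribution and is not reproduced; a naive reshape-to-grayscale stand-in marks its place.

```python
# Stream loop: encode tabular chunk -> images, one ResNet-18 epoch per chunk.
import torch
import torch.nn as nn
from torchvision import models

def encode_chunk(X, side=16):
    """Naive stand-in encoding: pad each row into a (1, side, side) image."""
    n, d = X.shape
    flat = torch.zeros(n, side * side)
    flat[:, :d] = X
    return flat.view(n, 1, side, side)

model = models.resnet18(weights=None, num_classes=2)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(3):                              # stream of data chunks
    X, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
    imgs = encode_chunk(X)
    opt.zero_grad()
    loss = loss_fn(model(imgs), y)              # one pass == one epoch here
    loss.backward()
    opt.step()
```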
On the Fourier analysis in the SO(3) space: EquiLoPO Network
Authors: Dmitrii Zhemchuzhnikov, Sergei Grudinin
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Group Theory (math.GR)
Abstract
Analyzing volumetric data with rotational invariance or equivariance is an active topic in current research. Existing deep-learning approaches utilize either group convolutional networks limited to discrete rotations or steerable convolutional networks with constrained filter structures. This work proposes a novel equivariant neural network architecture, the EquiLoPO Network, that achieves analytical Equivariance to Local Pattern Orientation on the continuous SO(3) group while allowing unconstrained trainable filters. Our key innovations are a group convolutional operation leveraging irreducible representations as the Fourier basis and a local activation function in the SO(3) space that provides a well-defined mapping from input to output functions, preserving equivariance. By integrating these operations into a ResNet-style architecture, we propose a model that overcomes the limitations of prior methods. A comprehensive evaluation on diverse 3D medical imaging datasets from MedMNIST3D demonstrates the effectiveness of our approach, which consistently outperforms the state of the art. This work suggests the benefits of true rotational equivariance on SO(3) and of the flexible unconstrained filters enabled by the local activation function, providing a flexible framework for equivariant deep learning on volumetric data with potential applications across domains. Our code is publicly available at \url{https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/EquiLoPO}.
Keyword: differential privacy
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
Keyword: privacy
Quantifying Social Presence in Mixed Reality: A Contemporary Review of Techniques and Innovations
ControlTraj: Controllable Trajectory Generation with Topology-Constrained Diffusion Model
Advances and Open Challenges in Federated Learning with Foundation Models
Introduction to Eye Tracking: A Hands-On Tutorial for Students and Practitioners
SMI-5: Five Dimensions of Social Media Interaction for Platform (De)Centralization
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Federated Learning with Only Positive Labels by Exploring Label Correlations
Non-Fungible Programs: Private Full-Stack Applications for Web3
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
APACHE: A Processing-Near-Memory Architecture for Multi-Scheme Fully Homomorphic Encryption
SynthEval: A Framework for Detailed Utility and Privacy Evaluation of Tabular Synthetic Data
Secure and Privacy-Preserving Authentication for Data Subject Rights Enforcement
Privacy-Preserving Billing for Local Energy Markets (Long Version)
An Element-Wise Weights Aggregation Method for Federated Learning
BeSound: Bluetooth-Based Position Estimation Enhancing with Cross-Modality Distillation
Keyword: machine learning
An Optimized Framework for Processing Large-scale Polysomnographic Data Incorporating Expert Human Oversight
Using Large Language Models to Enrich the Documentation of Datasets for Machine Learning
Feature Distribution Shift Mitigation with Contrastive Pretraining for Intrusion Detection
ML-based identification of the interface regions for coupling local and nonlocal models
Naïve Bayes and Random Forest for Crop Yield Prediction
Training all-mechanical neural networks for task learning through in situ backpropagation
Drop-Connect as a Fault-Tolerance Approach for RRAM-based Deep Neural Network Accelerators
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
A Survey of Deep Long-Tail Classification Advancements
MISLEAD: Manipulating Importance of Selected features for Learning Epsilon in Evasion Attack Deception
Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems
Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
Exploring Machine Learning Algorithms for Infection Detection Using GC-IMS Data: A Preliminary Study
SynthEval: A Framework for Detailed Utility and Privacy Evaluation of Tabular Synthetic Data
A Configurable and Efficient Memory Hierarchy for Neural Network Hardware Accelerator
OpTC -- A Toolchain for Deployment of Neural Networks on AURIX TC3xx Microcontrollers
Employing Two-Dimensional Word Embedding for Difficult Tabular Data Stream Classification
Near-Optimal Wafer-Scale Reduce
ST-MambaSync: The Confluence of Mamba Structure and Spatio-Temporal Transformers for Precipitous Traffic Prediction
An Element-Wise Weights Aggregation Method for Federated Learning
Generalization Measures for Zero-Shot Cross-Lingual Transfer
Boosting Architectural Generation via Prompts: Report
A comprehensive and easy-to-use multi-domain multi-task medical imaging meta-dataset (MedIMeta)
Unimodal and Multimodal Sensor Fusion for Wearable Activity Recognition
Keyword: optimization
On the implementation in Abaqus of the global--local iterative coupling and acceleration techniques
Facilitating Human Feedback for GenAI Prompt Optimization
Advances and Open Challenges in Federated Learning with Foundation Models
Metric3D v2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation
Research on OPF control of three-phase four-wire low-voltage distribution network considering uncertainty
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Decentralized Multi-Agent Trajectory Planning in Dynamic Environments with Spatiotemporal Occupancy Grid Maps
MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition
DPO: Differential reinforcement learning with application to optimal configuration search
Dynamic Beam Coverage for Satellite Communications Aided by Movable-Antenna Array
MalleTrain: Deep Neural Network Training on Unfillable Supercomputer Nodes
HTAP Databases: A Survey
Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models
Generative Diffusion Model (GDM) for Optimization of Wi-Fi Networks
Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy
Delay-Aware Multi-Agent Reinforcement Learning for Cooperative Adaptive Cruise Control with Model-based Stability Enhancement
BlissCam: Boosting Eye Tracking Efficiency with Learned In-Sensor Sparse Sampling
Generalizing the SINDy approach with nested neural networks
3D Face Morphing Attack Generation using Non-Rigid Registration
Real-Time Compressed Sensing for Joint Hyperspectral Image Transmission and Restoration for CubeSat
Large Language Models as In-context AI Generators for Quality-Diversity
Empirical Analysis of the Dynamic Binary Value Problem with IOHprofiler
Extending Cislunar Communication Network Reach Using Reconfigurable Intelligent Surfaces
Optimizing Energy Efficiency of 5G RedCap Beam Management for Smart Agriculture Applications
A Generalization of Relative Entropy to Count Vectors and its Concentration Property
Hardness and Tight Approximations of Demand Strip Packing
Mixed Supervised Graph Contrastive Learning for Recommendation
Learning deep Koopman operators with convex stability constraints
Seed Selection in the Heterogeneous Moran Process
Universal Adversarial Triggers Are Not Universal
Keyword: deep learning
Quantifying Social Presence in Mixed Reality: A Contemporary Review of Techniques and Innovations
Feature Distribution Shift Mitigation with Contrastive Pretraining for Intrusion Detection
Large-Scale Multipurpose Benchmark Datasets For Assessing Data-Driven Deep Learning Approaches For Water Distribution Networks
ML-based identification of the interface regions for coupling local and nonlocal models
Uncertainty in latent representations of variational autoencoders optimized for visual tasks
NMBEnet: Efficient Near-field mmWave Beam Training for Multiuser OFDM Systems Using Sub-6 GHz Pilots
An MRP Formulation for Supervised Learning: Generalized Temporal Difference Learning Models
Research on OPF control of three-phase four-wire low-voltage distribution network considering uncertainty
Brain Storm Optimization Based Swarm Learning for Diabetic Retinopathy Image Classification
Security Analysis of WiFi-based Sensing Systems: Threats from Perturbation Attacks
A Survey of Deep Long-Tail Classification Advancements
VulEval: Towards Repository-Level Evaluation of Software Vulnerability Detection
MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition
An Electromagnetism-Inspired Method for Estimating In-Grasp Torque from Visuotactile Sensors
A Real-time Evaluation Framework for Pedestrian's Potential Risk at Non-Signalized Intersections Based on Predicted Post-Encroachment Time
Building-PCC: Building Point Cloud Completion Benchmarks
Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision
Deep Learning for Accelerated and Robust MRI Reconstruction: a Review
DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images
No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement
Bi-Mamba4TS: Bidirectional Mamba for Time Series Forecasting
Toward Physics-Aware Deep Learning Architectures for LiDAR Intensity Simulation
Employing Two-Dimensional Word Embedding for Difficult Tabular Data Stream Classification
On the Fourier analysis in the SO(3) space: EquiLoPO Network