Abstract
Accurate epidemic forecasting plays a vital role in helping governments develop effective prevention measures for suppressing epidemics. Most existing spatio-temporal models cannot provide a general framework for stable and accurate forecasting of epidemics with diverse evolution trends. Incorporating epidemiological domain knowledge, ranging from single-patch to multi-patch, into neural networks is expected to improve forecasting accuracy. However, relying solely on single-patch knowledge neglects inter-patch interactions, while constructing multi-patch knowledge is challenging without population mobility data. To address these problems, we propose a novel hybrid model called the Metapopulation-based Spatio-Temporal Attention Network (MPSTAN). This model aims to improve the accuracy of epidemic forecasting by incorporating multi-patch epidemiological knowledge into a spatio-temporal model and adaptively defining inter-patch interactions. Moreover, we incorporate inter-patch epidemiological knowledge into both the model construction and the loss function to help the model learn epidemic transmission dynamics. Extensive experiments conducted on two representative datasets with different epidemiological evolution trends demonstrate that our proposed model outperforms the baselines and provides more accurate and stable short- and long-term forecasting. We confirm the effectiveness of domain knowledge in the learning model and investigate the impact of different ways of integrating domain knowledge on forecasting. We observe that using domain knowledge in both model construction and the loss function leads to more efficient forecasting, and that selecting appropriate domain knowledge can further improve accuracy.
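The abstract gives no implementation details, but the idea of embedding epidemiological knowledge in the loss can be sketched. Below is a hedged PyTorch sketch of a loss that penalizes deviations from discrete metapopulation SIR dynamics alongside the usual forecast error; all tensor names, the mobility mixing, and the weighting `lam` are illustrative assumptions, not MPSTAN's actual formulation.

```python
import torch

def metapop_sir_residual(S, I, R, mobility, beta, gamma, dt=1.0):
    """Residual of discrete metapopulation SIR dynamics (hypothetical form).

    S, I, R: (T, P) tensors of predicted compartment fractions per patch.
    mobility: (P, P) row-stochastic matrix of inter-patch flows.
    """
    # effective infectious pressure on each patch, mixed by mobility
    I_eff = I[:-1] @ mobility.T
    dS = -beta * S[:-1] * I_eff * dt
    dI = (beta * S[:-1] * I_eff - gamma * I[:-1]) * dt
    dR = gamma * I[:-1] * dt
    # how far the network's one-step-ahead predictions deviate from SIR
    res = ((S[1:] - S[:-1] - dS) ** 2
           + (I[1:] - I[:-1] - dI) ** 2
           + (R[1:] - R[:-1] - dR) ** 2)
    return res.mean()

def total_loss(pred, target, S, I, R, mobility, beta, gamma, lam=0.1):
    # data term plus epidemiological-knowledge term, as the abstract describes
    return torch.mean((pred - target) ** 2) + lam * metapop_sir_residual(
        S, I, R, mobility, beta, gamma)
```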
Pushing the Limits of Machine Design: Automated CPU Design with AI
Abstract
Design activity -- constructing an artifact description satisfying given goals and constraints -- distinguishes humanity from other animals and traditional machines, and endowing machines with design abilities at or beyond the human level has been a long-term pursuit. Though machines have already demonstrated their abilities in designing new materials, proteins, and computer programs with advanced artificial intelligence (AI) techniques, the search spaces for designing such objects are relatively small, and thus "Can machines design like humans?" remains an open question. To explore the boundary of machine design, here we present a new AI approach to automatically design a central processing unit (CPU), the brain of a computer and one of the most intricate devices humanity has ever designed. This approach generates the circuit logic of the CPU design, represented by a graph structure called a Binary Speculation Diagram (BSD), from only external input-output observations instead of formal program code. During the generation of the BSD, Monte Carlo-based expansion and the distance of Boolean functions are used to guarantee accuracy and efficiency, respectively. By efficiently exploring a search space of unprecedented size 10^{10^{540}}, which is, to the best of our knowledge, the largest of all machine-designed objects, and thus pushing the limits of machine design, our approach generates an industrial-scale RISC-V CPU within only 5 hours. The taped-out CPU successfully runs the Linux operating system and performs comparably to the human-designed Intel 80486SX CPU. In addition to learning the world's first CPU only from input-output observations, which may reform the semiconductor industry by significantly reducing the design cycle, our approach even autonomously discovers human knowledge of the von Neumann architecture.
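The paper's BSD construction is not spelled out in the abstract; as a loose, toy illustration of the underlying idea of inferring circuit logic purely from input-output observations, the sketch below builds an ID3-style binary decision tree from sampled I/O pairs, speculating a constant output whenever the observed samples agree. This is a hypothetical stand-in, not the paper's Monte Carlo-based BSD expansion.

```python
from collections import Counter
from itertools import product

def learn(samples, var=0, n_vars=4, min_agree=1.0):
    """Toy Shannon-style expansion from I/O observations.

    samples: list of (input_bits_tuple, output_bit). If the observed outputs
    (mostly) agree, speculate a constant leaf; otherwise split on a variable.
    An ID3-like illustration, not the paper's BSD construction.
    """
    counts = Counter(out for _, out in samples)
    majority, freq = counts.most_common(1)[0]
    if var == n_vars or freq / len(samples) >= min_agree:
        return majority                       # speculated constant node
    lo = [(x, y) for x, y in samples if x[var] == 0]
    hi = [(x, y) for x, y in samples if x[var] == 1]
    if not lo or not hi:
        return learn(samples, var + 1, n_vars, min_agree)
    return (var, learn(lo, var + 1, n_vars, min_agree),
                 learn(hi, var + 1, n_vars, min_agree))

def evaluate(node, x):
    while isinstance(node, tuple):
        var, lo, hi = node
        node = hi if x[var] else lo
    return node

# learn a 4-input majority function purely from observations
target = lambda x: int(sum(x) >= 2)
obs = [(x, target(x)) for x in product((0, 1), repeat=4)]
tree = learn(obs)
assert all(evaluate(tree, x) == y for x, y in obs)
```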
Deep Dynamic Epidemiological Modelling for COVID-19 Forecasting in Multi-level Districts
Authors: Ruhan Liu, Jiajia Li, Yang Wen, Huating Li, Ping Zhang, Bin Sheng, David Dagan Feng
Abstract
Objective: COVID-19 has spread worldwide and had a huge impact across the globe. Modeling the infectious spread of COVID-19 is essential for understanding the current situation and formulating intervention measures. Epidemiological equations based on the SEIR model simulate disease development. Traditional parameter estimation methods for solving SEIR equations cannot precisely fit real-world data due to varying situations, such as social distancing policies and intervention strategies. Additionally, learning-based models achieve outstanding fitting performance but cannot visualize mechanisms. Methods: Thus, we propose a deep dynamic epidemiological (DDE) method that combines the advantages of epidemiological equations and deep learning to achieve both high accuracy and visualization. The DDE contains deep networks that fit the effect function to simulate ever-changing situations, based on the neural ODE method for solving the variants' equations, ensuring fitting performance across multi-level areas. Results: We introduce four SEIR variants to fit different situations in different countries and regions. We compare our DDE method with traditional parameter estimation methods (Nelder-Mead, BFGS, Powell, Truncated Newton Conjugate-Gradient, Neural ODE) in fitting real-world data for three countries (the USA, Colombia, South Africa) and two regions (Wuhan in China, Piedmont in Italy). Our DDE method achieves the best Mean Square Error and Pearson coefficient in all five areas. Furthermore, compared with state-of-the-art learning-based approaches, the DDE outperforms all techniques, including LSTM, RNN, GRU, Random Forest, Extremely Randomized Trees, and Decision Tree. Conclusion: DDE presents outstanding predictive ability and a visualized display of the changes in infection rates in different regions and countries.
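To make the neural-ODE ingredient concrete, here is a hedged sketch of an SEIR system whose infection rate beta(t) is a small neural network, integrated with the torchdiffeq library's `odeint`. The MLP "effect function" and the fixed sigma/gamma values are illustrative assumptions, not the paper's four DDE variants.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # a common neural-ODE solver library

class SEIR(nn.Module):
    """SEIR dynamics with a learned, time-varying infection rate beta(t).

    The small MLP below is a hypothetical stand-in for the paper's
    deep effect networks.
    """
    def __init__(self, sigma=1 / 5.2, gamma=1 / 14):
        super().__init__()
        self.sigma, self.gamma = sigma, gamma
        self.beta_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                                      nn.Linear(32, 1), nn.Softplus())

    def forward(self, t, y):
        S, E, I, R = y
        beta = self.beta_net(t.reshape(1, 1)).squeeze()
        dS = -beta * S * I
        dE = beta * S * I - self.sigma * E
        dI = self.sigma * E - self.gamma * I
        dR = self.gamma * I
        return torch.stack([dS, dE, dI, dR])

model = SEIR()
y0 = torch.tensor([0.99, 0.005, 0.005, 0.0])
t = torch.linspace(0.0, 100.0, 101)
traj = odeint(model, y0, t)          # (101, 4) trajectory, differentiable
# fit by minimizing MSE between traj[:, 2] and observed infection curves
```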
Efficient Deep Spiking Multi-Layer Perceptrons with Multiplication-Free Inference
Abstract
Advances in adapting deep convolutional architectures for Spiking Neural Networks (SNNs) have significantly enhanced image classification performance and reduced computational burdens. However, the inability of Multiplication-Free Inference (MFI) to harmonize with attention and transformer mechanisms, which are critical to superior performance on high-resolution vision tasks, imposes limitations on these gains. To address this, our research explores a new pathway, drawing inspiration from the progress made in Multi-Layer Perceptrons (MLPs). We propose an innovative spiking MLP architecture that uses batch normalization to retain MFI compatibility and introduces a spiking patch encoding layer to reinforce local feature extraction capabilities. As a result, we establish an efficient multi-stage spiking MLP network that effectively blends global receptive fields with local feature extraction for comprehensive spike-based computation. Without relying on pre-training or sophisticated SNN training techniques, our network secures a top-1 accuracy of 66.39% on the ImageNet-1K dataset, surpassing the directly trained spiking ResNet-34 by 2.67%. Furthermore, we curtail computational costs, model capacity, and simulation steps. An expanded version of our network rivals the performance of the spiking VGG-16 network with a 71.64% top-1 accuracy, all while operating with a model capacity 2.1 times smaller. Our findings accentuate the potential of our deep SNN architecture in seamlessly integrating global and local learning abilities. Interestingly, the trained receptive field in our network mirrors the activity patterns of cortical cells.
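As a rough illustration of why batch normalization (rather than, say, layer normalization) preserves multiplication-free inference, the sketch below implements a generic Linear -> BatchNorm -> leaky integrate-and-fire block with a surrogate-gradient spike function. It is a textbook-style SNN block, not the paper's multi-stage architecture or its spiking patch encoding layer.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()
    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        return grad_out * ((v - 1.0).abs() < 0.5).float()

class SpikingMLPBlock(nn.Module):
    """Linear -> BatchNorm -> LIF, run over T simulation steps.

    BatchNorm keeps inference multiplication-free: after training it
    folds into the preceding linear layer, and because inputs are binary
    spikes, the matrix products reduce to additions.
    """
    def __init__(self, d_in, d_out, tau=2.0):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)
        self.bn = nn.BatchNorm1d(d_out)
        self.tau = tau

    def forward(self, x):                      # x: (T, B, d_in) spike trains
        T, B, _ = x.shape
        v, out = 0.0, []
        for t in range(T):
            cur = self.bn(self.fc(x[t]))
            v = v / self.tau + cur             # leaky membrane integration
            s = SpikeFn.apply(v)
            v = v * (1.0 - s)                  # hard reset after a spike
            out.append(s)
        return torch.stack(out)

blk = SpikingMLPBlock(64, 128)
spikes = (torch.rand(4, 8, 64) < 0.2).float()  # toy input: T=4, batch=8
print(blk(spikes).shape)                       # torch.Size([4, 8, 128])
```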
Polynomial Logical Zonotopes: A Set Representation for Reachability Analysis of Logical Systems
Authors: Amr Alanwar, Frank J. Jiang, Karl H. Johansson
Subjects: Logic in Computer Science (cs.LO); Computational Complexity (cs.CC); Data Structures and Algorithms (cs.DS); Systems and Control (eess.SY)
Abstract
In this paper, we introduce a set representation called polynomial logical zonotopes for performing exact and computationally efficient reachability analysis on logical systems. Polynomial logical zonotopes are a generalization of logical zonotopes, which are able to represent up to 2^n binary vectors using only n generators. Due to their construction, logical zonotopes can only support exact computation of some logical operations (XOR, NOT, XNOR), while other operations (AND, NAND, OR, NOR) result in over-approximations. In order to perform all fundamental logical operations exactly, we formulate a generalization of logical zonotopes that is constructed with additional dependent generators and exponent matrices. We prove that through this polynomial-like construction, we are able to perform all of the fundamental logical operations (XOR, NOT, XNOR, AND, NAND, OR, NOR) exactly. While we are able to perform all of the logical operations exactly, this comes with a slight increase in computational complexity compared to logical zonotopes. We show that we can use polynomial logical zonotopes to perform exact reachability analysis while retaining a low computational complexity. To illustrate and showcase the computational benefits of polynomial logical zonotopes, we present the results of performing reachability analysis on two use cases: (1) safety verification of an intersection crossing protocol, and (2) reachability analysis on a high-dimensional Boolean function. Moreover, to highlight the extensibility of logical zonotopes, we include an additional use case where we perform a computationally tractable exhaustive search for the key of a linear-feedback shift register.
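The exactness of XOR for plain logical zonotopes can be checked in a few lines: XOR-ing two zonotopes reduces to XOR-ing their centers and concatenating their generator lists. The brute-force sketch below (with made-up three-bit sets) verifies this; AND is where plain logical zonotopes over-approximate, which is what the paper's dependent generators and exponent matrices repair.

```python
from itertools import product

def enumerate_zonotope(center, generators):
    """All binary vectors: center XOR any subset of the generators."""
    pts = set()
    for choice in product((0, 1), repeat=len(generators)):
        v = list(center)
        for b, g in zip(choice, generators):
            if b:
                v = [x ^ y for x, y in zip(v, g)]
        pts.add(tuple(v))
    return pts

def xor_zonotopes(z1, z2):
    # XOR is exact: XOR the centers, concatenate the generator lists
    (c1, G1), (c2, G2) = z1, z2
    return ([x ^ y for x, y in zip(c1, c2)], G1 + G2)

z1 = ([1, 0, 1], [[1, 1, 0]])
z2 = ([0, 0, 1], [[0, 1, 1], [1, 0, 0]])
exact = {tuple(a ^ b for a, b in zip(p, q))
         for p in enumerate_zonotope(*z1) for q in enumerate_zonotope(*z2)}
assert enumerate_zonotope(*xor_zonotopes(z1, z2)) == exact
```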
Auto-BI: Automatically Build BI-Models Leveraging Local Join Prediction and Global Schema Graph
Abstract
Business Intelligence (BI) is crucial in modern enterprises and billion-dollar businesses. Traditionally, technical experts like database administrators would manually prepare BI-models (e.g., in star or snowflake schemas) that join tables in data warehouses, before less-technical business users can run analytics using end-user dashboarding tools. However, the popularity of self-service BI (e.g., Tableau and Power-BI) in recent years creates a strong demand for less-technical end-users to build BI-models themselves. We develop an Auto-BI system that can accurately predict BI models given a set of input tables, using a principled graph-based optimization problem we propose called \textit{k-Min-Cost-Arborescence} (k-MCA), which holistically considers both local join prediction and global schema-graph structures, leveraging a graph-theoretical structure called an \textit{arborescence}. While we prove that k-MCA is intractable and inapproximable in general, we develop novel algorithms that can solve k-MCA optimally, which are shown to be efficient in practice with sub-second latency and can scale to the largest BI-models we encounter (with close to 100 tables). Auto-BI is rigorously evaluated on a unique dataset with over 100K real BI models we harvested, as well as on 4 popular TPC benchmarks. It is shown to be both efficient and accurate, achieving over 0.9 F1-score on both real and synthetic benchmarks.
FFCV: Accelerating Training by Removing Data Bottlenecks
Authors: Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
Abstract
We present FFCV, a library for easy and fast machine learning model training. FFCV speeds up model training by eliminating (often subtle) data bottlenecks from the training process. In particular, we combine techniques such as an efficient file storage format, caching, data pre-loading, asynchronous data transfer, and just-in-time compilation to (a) make data loading and transfer significantly more efficient, ensuring that GPUs can reach full utilization; and (b) offload as much data processing as possible to the CPU asynchronously, freeing GPU cycles for training. Using FFCV, we train ResNet-18 and ResNet-50 on the ImageNet dataset with a competitive tradeoff between accuracy and training time. For example, we are able to train an ImageNet ResNet-50 model to 75\% accuracy in only 20 minutes on a single machine. We demonstrate FFCV's performance, ease-of-use, extensibility, and ability to adapt to resource constraints through several case studies. Detailed installation instructions, documentation, and a Slack support channel are available at https://ffcv.io/.
NTT-Based Polynomial Modular Multiplication for Homomorphic Encryption: A Tutorial
Authors: Sin-Wei Chiu, Keshab K. Parhi
Subjects: Cryptography and Security (cs.CR); Signal Processing (eess.SP)
Abstract
Homomorphic Encryption (HE) allows any third party to operate on encrypted data without decrypting it in advance. For the majority of HE schemes, the multiplicative depth of circuits is the main practical limitation in performing computations over encrypted data; hence, homomorphic multiplication is one of the most important components of homomorphic encryption. Since most HE schemes are constructed from the ring learning with errors (R-LWE) problem, efficient implementation of polynomial modular multiplication becomes critical. This work describes various approaches to implementing polynomial modular multiplication based on the number theoretic transform (NTT).
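As a worked example of the tutorial's subject, the sketch below multiplies two polynomials modulo (x^n + 1, q) via a negacyclic NTT: coefficients are pre-twisted by powers of a 2n-th root of unity psi, transformed, multiplied pointwise, and untwisted. The toy parameters (n = 8, q = 17, psi = 3) and the deliberately naive O(n^2) transform are chosen for readability, not performance.

```python
import random

# q prime with q = 1 (mod 2n); psi is a primitive 2n-th root of unity mod q
q, n, psi = 17, 8, 3
w = pow(psi, 2, q)                # n-th root of unity used by the transform

def ntt(a, root):
    # naive O(n^2) number theoretic transform, for clarity not speed
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
            for i in range(n)]

def negacyclic_mul(a, b):
    # pre-twisting by psi^i realizes reduction mod x^n + 1
    at = ntt([a[i] * pow(psi, i, q) % q for i in range(n)], w)
    bt = ntt([b[i] * pow(psi, i, q) % q for i in range(n)], w)
    ct = [x * y % q for x, y in zip(at, bt)]
    c = ntt(ct, pow(w, q - 2, q))             # inverse transform (root^{-1})
    n_inv, psi_inv = pow(n, q - 2, q), pow(psi, q - 2, q)
    return [c[i] * n_inv * pow(psi_inv, i, q) % q for i in range(n)]

def schoolbook(a, b):
    # direct negacyclic convolution: x^n wraps around with a sign flip
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k, s = (i + j) % n, 1 if i + j < n else -1
            c[k] = (c[k] + s * a[i] * b[j]) % q
    return c

a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
assert negacyclic_mul(a, b) == schoolbook(a, b)
```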
Cloud-Native Architectural Characteristics and their Impacts on Software Quality: A Validation Survey
Authors: Robin Lichtenthäler, Jonas Fritzsch, Guido Wirtz
Abstract
Cloud-native architectures are often based on microservices and combine different aspects that aim to leverage the capabilities of cloud platforms for software development. Cloud-native architectural characteristics, such as patterns and best practices, help to design, develop, deploy, and operate such systems efficiently with minimal time and effort. However, architects and developers face the challenge of applying such characteristics in a targeted manner to improve selected quality attributes. Hence, we aim to investigate relationships, or more specifically impacts, between architectural characteristics of cloud-native applications and quality aspects. The architectural characteristics under consideration are based on our recently proposed quality model for cloud-native software architectures. To validate its elements and revise this literature-based quality model, we conducted a questionnaire-based survey among 42 software professionals. While the survey results reinforce the quality model to a fair extent, they also indicate parts requiring revision. Thus, as an additional contribution, we present an updated version of the quality model incorporating the survey results. Practitioners will benefit from our work when designing and developing cloud-native applications in a quality-oriented way. Researchers will moreover profit from our specifically developed questionnaire-based survey tool, which allows surveying complex structures like a hierarchical quality model.
An efficient and straightforward online quantization method for a data stream through remove-birth updating
Abstract
The growth of network-connected devices is creating an explosion of data, known as big data, and posing significant challenges to efficient data analysis. This data is generated continuously, creating a dynamic flow known as a data stream. The characteristics of a data stream may change dynamically, and this change is known as concept drift. Consequently, a method for handling data streams must efficiently reduce their volume while dynamically adapting to these changing characteristics. This paper proposes a simple online vector quantization method for data streams subject to concept drift. The proposed method identifies and replaces units with low win probability through remove-birth updating, thus achieving rapid adaptation to concept drift. Furthermore, the results of this study show that the proposed method generates few dead units even in the presence of concept drift. This study also suggests that some metrics calculated from the proposed method can be helpful for drift detection.
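A minimal sketch of the remove-birth idea, assuming a simple winner-take-all online update: each unit tracks an exponentially smoothed win probability, and the least-used unit is re-born at the current input once its probability falls below a threshold. All rates and thresholds below are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

class RemoveBirthVQ:
    """Online vector quantization with remove-birth updating (sketch)."""

    def __init__(self, n_units, dim, lr=0.05, decay=0.01, p_min=0.001):
        self.w = rng.normal(size=(n_units, dim))
        self.p = np.full(n_units, 1.0 / n_units)  # smoothed win probabilities
        self.lr, self.decay, self.p_min = lr, decay, p_min

    def update(self, x):
        k = np.argmin(((self.w - x) ** 2).sum(axis=1))   # winner unit
        self.w[k] += self.lr * (x - self.w[k])           # online quantization
        self.p *= (1.0 - self.decay)
        self.p[k] += self.decay
        dead = np.argmin(self.p)
        if self.p[dead] < self.p_min:                    # remove-birth step
            self.w[dead] = x + rng.normal(scale=1e-3, size=x.shape)
            self.p[dead] = self.p.mean()

vq = RemoveBirthVQ(n_units=16, dim=2)
for t in range(5000):
    drift = np.array([0.0, 0.0]) if t < 2500 else np.array([5.0, 5.0])
    vq.update(rng.normal(size=2) + drift)                # abrupt concept drift
```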
Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy
Authors: Hyunwoo Song, Shuojue Yang, Zijian Wu, Hamid Moradi, Russell H. Taylor, Jin U. Kang, Septimiu E. Salcudean, Emad M. Boctor
Abstract
Purpose: To achieve effective robot-assisted laparoscopic prostatectomy, the integration of a transrectal ultrasound (TRUS) imaging system, the most widely used imaging modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. Therefore, we propose an image co-registration algorithm based on a photoacoustic marker (PM) method, where the ultrasound / photoacoustic (US/PA) images can be registered to the endoscopic camera images to ultimately enable the TRUS transducer to automatically track the surgical instrument. Methods: An optimization-based algorithm is proposed to co-register the images from the two different imaging modalities. The principles of light propagation and the uncertainty in PM detection are incorporated into this algorithm to improve its stability and accuracy. The algorithm is validated using the previously developed US/PA image-guided system with a da Vinci surgical robot. Results: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice. The result is also comparable with our previous approach, and the proposed method can be implemented with a standard white-light stereo camera and does not require highly accurate localization of the PM. Conclusion: The proposed frame registration algorithm enables a simple yet efficient integration of commercial US/PA imaging systems into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.
A Hierarchical Approach to exploiting Multiple Datasets from TalkBank
Authors: Man Ho Wong
Subjects: Databases (cs.DB); Computation and Language (cs.CL)
Abstract
TalkBank is an online database that facilitates the sharing of linguistics research data. However, TalkBank's existing API has limited data filtering and batch processing capabilities. To overcome these limitations, this paper introduces a pipeline framework that employs a hierarchical search approach, enabling efficient and complex data selection. This approach involves a quick preliminary screening of the corpora a researcher may need, followed by an in-depth search for target data based on specific criteria. The identified files are then indexed, providing easier access for future analysis. Furthermore, the paper demonstrates how data from different studies curated with the framework can be integrated by standardizing and cleaning metadata, allowing researchers to extract insights from a large, integrated dataset. While designed for TalkBank, the framework can also be adapted to process data from other open-science platforms.
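A hedged sketch of the two-stage search over a hypothetical local mirror of TalkBank corpora: a cheap pass over corpus-level metadata shortlists candidates, and a deeper pass scans CHAT (`.cha`) transcript headers against a user-supplied predicate, writing an index for later reuse. The directory layout and metadata fields are assumptions for illustration, not TalkBank's actual schema.

```python
from pathlib import Path
import json

ROOT = Path("talkbank_data")   # hypothetical local mirror of corpora

def screen_corpora(keyword):
    """Stage 1: quick screening of corpus-level metadata files."""
    hits = []
    for meta_path in ROOT.glob("*/corpus_meta.json"):
        meta = json.loads(meta_path.read_text())
        if keyword.lower() in meta.get("description", "").lower():
            hits.append(meta_path.parent)
    return hits

def deep_search(corpora, predicate):
    """Stage 2: in-depth scan of transcript headers in shortlisted corpora."""
    index = {}
    for corpus in corpora:
        for f in corpus.rglob("*.cha"):        # CHAT transcript files
            header = f.read_text(errors="ignore").splitlines()[:30]
            if predicate(header):
                index.setdefault(corpus.name, []).append(str(f))
    return index

# e.g. shortlist child-language corpora, then keep files with a child speaker
corpora = screen_corpora("child language")
index = deep_search(
    corpora, lambda hdr: any(l.startswith("@ID") and "|CHI|" in l for l in hdr))
Path("index.json").write_text(json.dumps(index, indent=2))
```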
Constant Memory Attention Block
Authors: Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed
Abstract
Modern foundation model architectures rely on attention mechanisms to effectively capture context. However, these methods require memory that grows linearly or quadratically with the number of inputs/datapoints, limiting their applicability in low-compute domains. In this work, we propose the Constant Memory Attention Block (CMAB), a novel general-purpose attention block that computes its output in constant memory and performs updates in constant computation. Highlighting CMAB's efficacy, we introduce methods for Neural Processes and Temporal Point Processes. Empirically, we show our proposed methods achieve results competitive with the state-of-the-art while being significantly more memory efficient.
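CMAB's internals are not given in the abstract, but the basic mechanism that makes constant-memory attention possible can be sketched: a fixed set of latent queries attends to a stream of key/value pairs using an online-softmax accumulator, so memory is independent of the number of datapoints. This is a sketch of that general trick, not CMAB itself.

```python
import numpy as np

def streaming_attention(queries, kv_stream, d):
    """Cross-attention computed one key/value pair at a time.

    Memory is O(#queries * d) no matter how many datapoints the stream
    yields: a running (max, numerator, denominator) triple replaces the
    full logit matrix (an online-softmax sketch).
    """
    m = np.full(queries.shape[0], -np.inf)   # running max of logits
    num = np.zeros((queries.shape[0], d))    # running weighted value sum
    den = np.zeros(queries.shape[0])         # running softmax denominator
    for k, v in kv_stream:
        logit = queries @ k / np.sqrt(d)
        m_new = np.maximum(m, logit)
        scale = np.exp(m - m_new)
        w = np.exp(logit - m_new)
        num = num * scale[:, None] + w[:, None] * v[None, :]
        den = den * scale + w
        m = m_new
    return num / den[:, None]

rng = np.random.default_rng(0)
d, latents = 16, rng.normal(size=(8, 16))    # fixed-size learned queries
stream = ((rng.normal(size=d), rng.normal(size=d)) for _ in range(10_000))
out = streaming_attention(latents, stream, d)   # shape (8, 16)
```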
Solving time-dependent PDEs with the ultraspherical spectral method
Abstract
We apply the ultraspherical spectral method to solving time-dependent PDEs by proposing two approaches to discretization based on the method of lines, and we show that these approaches produce approximately the same results. We analyze the stability, the error, and the computational cost of the proposed method. In addition, we show how adaptivity can be incorporated to offer adequate spatial resolution efficiently. Both linear and nonlinear problems are considered. We also explore time integration using exponential integrators with the ultraspherical spatial discretization. Comparisons with the Chebyshev pseudospectral method are given throughout the discussion; they show that the ultraspherical spectral method is a competitive candidate for the spatial discretization of time-dependent PDEs.
TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter
Abstract
Visual foundation models like CLIP excel in learning feature representations from extensive datasets through self-supervised methods, demonstrating remarkable transfer learning and generalization capabilities. A growing number of applications based on visual foundation models are emerging, including innovative solutions such as BLIP-2. These applications employ pre-trained CLIP models as upstream feature extractors and train various downstream modules to accomplish diverse tasks. In situations involving system upgrades that require updating the upstream foundation model, it becomes essential to re-train all downstream modules to adapt to the new foundation model, which is inflexible and inefficient. In this paper, we introduce a parameter-efficient and task-agnostic adapter, dubbed TaCA, that facilitates compatibility across distinct foundation models while ensuring enhanced performance for the new models. TaCA allows downstream applications to seamlessly integrate better-performing foundation models without necessitating retraining. We conduct extensive experimental validation of TaCA using models of different scales with up to one billion parameters on various tasks such as video-text retrieval, video recognition, and visual question answering. The results consistently demonstrate the emergent ability of TaCA in hot-plugging upgrades for visual foundation models. Code and models will be available at https://github.com/TencentARC/TaCA.
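A hedged sketch of the adapter idea: both the old and new backbones stay frozen, and only a small adapter is trained to map the new model's features into the old embedding space, so frozen downstream modules keep working. The stand-in linear "backbones", the cosine objective, and all dimensions are illustrative assumptions, not TaCA's actual training recipe.

```python
import torch
import torch.nn as nn

# Stand-ins for an old and a new visual backbone; in practice these would
# be, e.g., two CLIP variants of different scales.
old_backbone = nn.Linear(3 * 224 * 224, 512).eval()
new_backbone = nn.Linear(3 * 224 * 224, 768).eval()
for p in list(old_backbone.parameters()) + list(new_backbone.parameters()):
    p.requires_grad_(False)                  # both backbones stay frozen

adapter = nn.Linear(768, 512)                # the only trainable piece
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

for _ in range(100):                         # unlabeled images suffice
    x = torch.randn(32, 3 * 224 * 224)
    with torch.no_grad():
        z_old = old_backbone(x)
        z_new = new_backbone(x)
    # align the new model's features with the old embedding space so that
    # frozen downstream modules keep working after the upgrade
    loss = 1 - torch.cosine_similarity(adapter(z_new), z_old).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```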
Probabilistic Slide-support Manipulation Planning in Clutter
Abstract
To safely and efficiently extract an object from clutter, this paper presents a bimanual manipulation planner in which one hand of the robot slides the target object out of the clutter while the other supports the surrounding objects to prevent the clutter from collapsing. Our method uses a neural network to predict the physical behavior of the clutter when the target object is moved. We generate the most efficient action based on Monte Carlo tree search. The grasping and sliding actions are planned to minimize the number of motion sequences needed to pick the target object. In addition, the object to be supported is chosen to minimize the position change of the surrounding objects. Experiments with a real bimanual robot confirmed that the robot could retrieve the target object while reducing the total number of motion sequences and improving safety.
High order entropy stable discontinuous Galerkin spectral element methods through subcell limiting
Abstract
Existing subcell limiting strategies for discontinuous Galerkin spectral element methods do not provably satisfy a semi-discrete cell entropy inequality. In this work, we introduce an extension to the subcell limiting strategy that satisfies the semi-discrete cell entropy inequality by formulating the limiting factors as solutions to an optimization problem. The optimization problem is efficiently solved using a deterministic greedy algorithm. We also discuss the extension of the proposed subcell limiting strategy to preserve general convex constraints. Numerical experiments confirm that the proposed limiting strategy preserves high-order accuracy for smooth solutions and satisfies the cell entropy inequality.
Abstract
Soft object manipulation tasks in domestic scenes pose a significant challenge for existing robotic skill learning techniques due to their complex dynamics and variable shape characteristics. Since learning new manipulation skills from human demonstrations is an effective way to build robot applications, developing prior knowledge of the representation and dynamics of soft objects is necessary. In this regard, we propose a pre-trained soft object manipulation skill learning model, namely SoftGPT, that is trained using large amounts of exploration data, consisting of a three-dimensional heterogeneous graph representation and a GPT-based dynamics model. For each downstream task, a goal-oriented policy agent is trained to predict the subsequent actions, and SoftGPT generates the consequences of these actions. Integrating these two approaches establishes a thinking process in the robot's mind that provides rollouts for facilitating policy learning. Our results demonstrate that leveraging prior knowledge through this thinking process can efficiently learn various soft object manipulation skills, with the potential for direct learning from human demonstrations.
Constructing Colloquial Dataset for Persian Sentiment Analysis of Social Microblogs
Authors: Mojtaba Mazoochi (ICT Research Institute, Tehran, Iran), Leyla Rabiei (Iran Telecommunication Research Center (ITRC), Tehran, Iran), Farzaneh Rahmani (Iran Telecommunication Research Center (ITRC), Tehran, Iran), Zeinab Rajabi (Iran Telecommunication Research Center (ITRC), Tehran, Iran)
Abstract
Introduction: Microblogging websites have amassed rich data sources for sentiment analysis and opinion mining. In this regard, sentiment classification has frequently proven inefficient because microblog posts typically lack syntactically consistent terms and representations, since users on these social networks do not like to write lengthy statements. Also, there are limitations for low-resource languages. The Persian language has exceptional characteristics and demands unique annotated data and models for the sentiment analysis task, which are distinct from those of English text. Method: This paper first constructs a user opinion dataset called ITRC-Opinion, built collaboratively and in-house. Our dataset contains 60,000 informal and colloquial Persian texts from social microblogs such as Twitter and Instagram. Second, this study proposes a new deep convolutional neural network (CNN) model for more effective sentiment analysis of colloquial text in social microblog posts. The constructed dataset is used to evaluate the presented model. Furthermore, models such as LSTM, CNN-RNN, BiLSTM, and BiGRU with different word embeddings, including fastText, GloVe, and Word2vec, were evaluated on our dataset. Results: The results demonstrate the benefit of our dataset and the proposed model (72% accuracy), showing a meaningful improvement in sentiment classification performance.
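The paper's CNN is not specified in the abstract; below is a hedged, Kim-style text-CNN sketch of the general kind of model being described: an embedding layer feeding parallel convolutions of several widths, max-pooled and concatenated into a classifier. Vocabulary size, filter widths, and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Generic convolutional text classifier (a stand-in, not the paper's).

    Pretrained embeddings (e.g., fastText) would replace the randomly
    initialized embedding table in practice.
    """
    def __init__(self, vocab, emb=300, n_filters=100, widths=(3, 4, 5), classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, n_filters, w) for w in widths)
        self.fc = nn.Linear(n_filters * len(widths), classes)

    def forward(self, ids):                       # ids: (B, L) token indices
        x = self.emb(ids).transpose(1, 2)         # (B, emb, L)
        feats = [torch.relu(c(x)).amax(dim=2) for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

model = TextCNN(vocab=50_000)
logits = model(torch.randint(0, 50_000, (8, 64)))  # (8, 2) class scores
```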
Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
Abstract
We develop an approach to efficiently grow neural networks, within which parameterization and optimization strategies are designed by considering their effects on the training dynamics. Unlike existing growing methods, which follow simple replication heuristics or utilize auxiliary gradient-based local optimization, we craft a parameterization scheme that dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves, and maintains the inference functionality of the network. To address the optimization difficulty resulting from imbalanced training effort distributed to subnetworks fading in at different growth phases, we propose a learning rate adaptation mechanism that rebalances the gradient contribution of these separate subcomponents. Experimental results show that our method achieves comparable or better accuracy than training large fixed-size models, while saving a substantial portion of the original computation budget for training. We demonstrate that these gains translate into real wall-clock training speedups.
OptIForest: Optimal Isolation Forest for Anomaly Detection
Abstract
Anomaly detection plays an increasingly important role in various fields for critical tasks such as intrusion detection in cybersecurity, financial risk detection, and human health monitoring. A variety of anomaly detection methods have been proposed, and a category based on the isolation forest mechanism stands out due to its simplicity, effectiveness, and efficiency; e.g., iForest is often employed as a state-of-the-art detector for real deployments. While the majority of isolation forests use a binary structure, the LSHiForest framework has demonstrated that a multi-fork isolation tree structure can lead to better detection performance. However, no theoretical work has answered the fundamentally and practically important question of the optimal tree structure for an isolation forest with respect to the branching factor. In this paper, we establish a theory on isolation efficiency to answer this question and determine the optimal branching factor for an isolation tree. Based on this theoretical underpinning, we design a practical optimal isolation forest, OptIForest, incorporating clustering-based learning to hash, which enables more information to be learned from data for better isolation quality. The rationale of our approach relies on a better bias-variance trade-off achieved by bias reduction in OptIForest. Extensive experiments on a series of benchmarking datasets for comparative and ablation studies demonstrate that our approach can efficiently and robustly achieve better detection performance in general than the state of the art, including deep-learning-based methods.
Generative Multimodal Entity Linking
Authors: Senbao Shi, Zhenran Xu, Baotian Hu, Min Zhang
Abstract
Multimodal Entity Linking (MEL) is the task of mapping mentions with multimodal contexts to the referent entities from a knowledge base (e.g., Wikipedia). Prior MEL methods mainly focus on designing complex multimodal interaction mechanisms and require fine-tuning all model parameters, which can be prohibitively costly and difficult to scale in the era of Large Language Models (LLMs). In this work, we propose GEMEL, a simple yet effective Generative Multimodal Entity Linking method, which leverages the capabilities of LLMs from large-scale pre-training to directly generate target entity names. We keep the vision and language model frozen and only train a linear layer to enable cross-modality interactions. To adapt LLMs to the MEL task, we take advantage of the emerging in-context learning (ICL) capability of LLMs by retrieving multimodal instances as demonstrations. Extensive experiments show that with only ~0.3% of the model parameters fine-tuned, GEMEL achieves state-of-the-art results on two well-established MEL datasets (4.1% accuracy gains on WikiDiverse and 15.4% accuracy gains on WikiMEL). Our approach is compatible with any off-the-shelf language model, paving the way towards an efficient and general solution for utilizing LLMs in the MEL task.
On Exploring Node-feature and Graph-structure Diversities for Node Drop Graph Pooling
Authors: Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu
Abstract
A pooling operation is essential for effective graph-level representation learning, where the node drop pooling has become one mainstream graph pooling technology. However, current node drop pooling methods usually keep the top-k nodes according to their significance scores, which ignore the graph diversity in terms of the node features and the graph structures, thus resulting in suboptimal graph-level representations. To address the aforementioned issue, we propose a novel plug-and-play score scheme and refer to it as MID, which consists of a \textbf{M}ultidimensional score space with two operations, \textit{i.e.}, fl\textbf{I}pscore and \textbf{D}ropscore. Specifically, the multidimensional score space depicts the significance of nodes through multiple criteria; the flipscore encourages the maintenance of dissimilar node features; and the dropscore forces the model to notice diverse graph structures instead of being stuck in significant local structures. To evaluate the effectiveness of our proposed MID, we perform extensive experiments by applying it to a wide variety of recent node drop pooling methods, including TopKPool, SAGPool, GSAPool, and ASAP. Specifically, the proposed MID can efficiently and consistently achieve about 2.8\% average improvements over the above four methods on seventeen real-world graph classification datasets, including four social datasets (IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, and COLLAB), and thirteen biochemical datasets (D\&D, PROTEINS, NCI1, MUTAG, PTC-MR, NCI109, ENZYMES, MUTAGENICITY, FRANKENSTEIN, HIV, BBBP, TOXCAST, and TOX21). Code is available at~\url{https://github.com/whuchuang/mid}.
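For context, here is a hedged sketch of the vanilla top-k node drop pooling that methods like TopKPool implement; the per-node `score` argument is exactly the slot where a plug-and-play scheme such as MID would substitute its multidimensional flip/drop scores. The sketch keeps the plain single-score variant.

```python
import torch

def topk_node_drop_pool(x, edge_index, score, ratio=0.5):
    """Plain top-k node drop pooling (TopKPool-style sketch).

    x: (N, F) node features; edge_index: (2, E) edge list;
    score: (N,) per-node significance values.
    """
    k = max(1, int(ratio * x.size(0)))
    keep = torch.topk(score, k).indices                 # top-k nodes survive
    x_pooled = x[keep] * torch.sigmoid(score[keep]).unsqueeze(1)  # gated features
    # keep only edges whose endpoints both survive, then relabel them
    mask = torch.zeros(x.size(0), dtype=torch.bool)
    mask[keep] = True
    remap = torch.full((x.size(0),), -1, dtype=torch.long)
    remap[keep] = torch.arange(k)
    e = edge_index[:, mask[edge_index[0]] & mask[edge_index[1]]]
    return x_pooled, remap[e]

x = torch.randn(6, 16)                                  # 6 nodes, 16 features
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
score = torch.randn(6)
x_p, e_p = topk_node_drop_pool(x, edge_index, score)
```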
Ladder Fine-tuning approach for SAM integrating complementary network
Abstract
Recently, foundation models have been introduced that demonstrate strong performance across various tasks in the field of computer vision. These models, such as the Segment Anything Model (SAM), are generalized models trained on huge datasets. Currently, ongoing research focuses on exploring the effective utilization of these generalized models for specific domains, such as medical imaging. However, in medical imaging, the lack of training samples due to privacy concerns and other factors presents a major challenge for applying these generalized models to medical image segmentation tasks. To address this issue, effective fine-tuning of these models is crucial to ensure their optimal utilization. In this study, we propose to combine a complementary Convolutional Neural Network (CNN) with the standard SAM network for medical image segmentation. To reduce the burden of fine-tuning the large foundation model and implement a cost-efficient training scheme, we focus only on fine-tuning the additional CNN network and the SAM decoder part. This strategy significantly reduces training time and achieves competitive results on publicly available datasets. The code is available at https://github.com/11yxk/SAM-LST.
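The freezing strategy can be sketched in a few lines of PyTorch: the heavy SAM encoders are frozen and only the complementary CNN and the mask decoder remain trainable. The `Stub` module below is a stand-in for a real loaded SAM; the submodule names mirror SAM's public structure, but the shapes are illustrative.

```python
import torch.nn as nn

class Stub(nn.Module):
    """Stand-in for a loaded SAM model (illustrative shapes only)."""
    def __init__(self):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, 8, 3)
        self.prompt_encoder = nn.Linear(8, 8)
        self.mask_decoder = nn.Conv2d(8, 1, 1)

sam = Stub()
cnn = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 1, 1))

# Freeze the heavy encoders; only the complementary CNN branch and the
# lightweight mask decoder receive gradients, as the strategy prescribes.
for module in (sam.image_encoder, sam.prompt_encoder):
    for p in module.parameters():
        p.requires_grad_(False)

trainable = [p for p in sam.parameters() if p.requires_grad]
trainable += list(cnn.parameters())
print(sum(p.numel() for p in trainable), "trainable parameters")
```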
Exploring the Range of Possible Outcomes by means of Logical Scenario Analysis and Reduction for Testing Automated Driving Systems
Abstract
With the implementation of the new EU regulation 2022/1426 regarding the type-approval of the automated driving system (ADS) of fully automated vehicles, scenario-based testing has gained significant importance in evaluating the performance and safety of advanced driver assistance systems and automated driving systems. However, the exploration and generation of concrete scenarios from a single logical scenario can often lead to a number of similar or redundant scenarios, which may not contribute to the testing goals. This paper focuses on reducing the scenario set by clustering concrete scenarios derived from a single logical scenario. By employing clustering techniques, redundant and uninteresting scenarios can be identified and eliminated, resulting in a representative scenario set. This reduction allows for a more focused and efficient testing process, enabling the allocation of resources to the most relevant and critical scenarios. Furthermore, the identified clusters can provide valuable insights into the scenario space, revealing patterns and potential problems with the system's behavior.
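A hedged sketch of the reduction: concrete scenarios sampled from one logical scenario's parameter ranges are clustered, and the member closest to each centroid is kept as the representative. The parameter names, ranges, and cluster count are illustrative assumptions, and k-means is used as a generic stand-in for whatever clustering technique is ultimately chosen.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Concrete scenarios sampled from one logical "cut-in" scenario; columns
# are hypothetical parameters: ego speed [m/s], gap [m], cut-in speed [m/s]
scenarios = rng.uniform([15, 5, 10], [35, 40, 30], size=(2000, 3))

# normalize, cluster, and keep one representative (medoid) per cluster
z = (scenarios - scenarios.mean(0)) / scenarios.std(0)
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(z)

representatives = []
for k in range(20):
    members = np.where(km.labels_ == k)[0]
    d = np.linalg.norm(z[members] - km.cluster_centers_[k], axis=1)
    representatives.append(members[np.argmin(d)])   # closest to centroid

reduced_set = scenarios[representatives]            # 2000 -> 20 test scenarios
```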
Faster Compression of Deterministic Finite Automata
Authors: Philip Bille, Inge Li Gørtz, Max Rishøj Pedersen
Abstract
Deterministic finite automata (DFA) are a classic tool for high throughput matching of regular expressions, both in theory and practice. Due to their high space consumption, extensive research has been devoted to compressed representations of DFAs that still support efficient pattern matching queries. Kumar~et~al.~[SIGCOMM 2006] introduced the \emph{delayed deterministic finite automaton} (\ddfa{}) which exploits the large redundancy between inter-state transitions in the automaton. They showed it to obtain up to two orders of magnitude compression of real-world DFAs, and their work formed the basis of numerous subsequent results. Their algorithm, as well as later algorithms based on their idea, have an inherent quadratic-time bottleneck, as they consider every pair of states to compute the optimal compression. In this work we present a simple, general framework based on locality-sensitive hashing for speeding up these algorithms to achieve sub-quadratic construction times for \ddfa{}s. We apply the framework to speed up several algorithms to near-linear time, and experimentally evaluate their performance on real-world regular expression sets extracted from modern intrusion detection systems. We find an order of magnitude improvement in compression times, with either little or no loss of compression, or even significantly better compression in some cases.
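The abstract's locality-sensitive-hashing idea can be illustrated as follows: treating each state's outgoing-transition row as a vector, hashing a few sampled character positions groups near-identical rows into buckets, so only within-bucket pairs are compared. This sketch conveys the general sub-quadratic candidate-generation idea, not the paper's exact framework.

```python
import random
from collections import defaultdict
from itertools import combinations

def similar_state_pairs(delta, n_chars, n_tables=8, band=4, seed=0):
    """Bucket DFA states by their outgoing-transition vectors via LSH.

    delta[s][c] is the successor of state s on character c. Sampling a
    few character positions per hash table sends states with largely
    identical transition rows to the same bucket, avoiding the quadratic
    all-pairs comparison when searching for default-transition targets.
    """
    rng = random.Random(seed)
    candidates = set()
    for _ in range(n_tables):
        positions = rng.sample(range(n_chars), band)
        buckets = defaultdict(list)
        for s, row in enumerate(delta):
            buckets[tuple(row[c] for c in positions)].append(s)
        for group in buckets.values():
            candidates.update(combinations(group, 2))
    return candidates

# toy DFA: 100 states over a 256-character alphabet, rows highly redundant
delta = [[(s + c) % 4 for c in range(256)] for s in range(100)]
pairs = similar_state_pairs(delta, 256)
```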
HypeRS: Building a Hypergraph-driven ensemble Recommender System
Abstract
Recommender systems are designed to predict user preferences over collections of items. These systems process users' previous interactions to decide which items should be ranked higher to satisfy their desires. An ensemble recommender system can achieve great recommendation performance by effectively combining the decisions generated by individual models. In this paper, we propose a novel ensemble recommender system that combines predictions made by different models into a unified hypergraph ranking framework. This is the first time that hypergraph ranking has been employed to model an ensemble of recommender systems. Hypergraphs are generalizations of graphs where multiple vertices can be connected via hyperedges, efficiently modeling high-order relations. We differentiate real and predicted connections between users and items by assigning different hyperedge weights to individual recommender systems. We perform experiments using four datasets from the fields of movie, music and news media recommendation. The obtained results show that the ensemble hypergraph ranking method generates more accurate recommendations compared to the individual models and a weighted hybrid approach. The assignment of different hyperedge weights to the ensemble hypergraph further improves the performance compared to a setting with identical hyperedge weights.
Solving the complete pseudo-impulsive radiation and diffraction problem using a spectral element method
Authors: Jens Visbech, Allan P. Engsig-Karup, Harry B. Bingham
Abstract
This paper presents a novel, efficient, high-order accurate, and stable spectral element-based model for computing the complete three-dimensional linear radiation and diffraction problem for floating offshore structures. We present a solution to a pseudo-impulsive formulation in the time domain, where frequency-dependent quantities, such as added mass, radiation damping, and the wave excitation force for an arbitrary heading angle, $\beta$, are evaluated using Fourier transforms of the tailored time-domain responses. The spatial domain is tessellated by an unstructured high-order hybrid configured mesh and represented by piece-wise polynomial basis functions in the spectral element space. Fourth-order accurate time integration is employed through an explicit four-stage Runge-Kutta method and complemented by fourth-order finite difference approximations for time differentiation. To reduce the computational burden, the model can make use of symmetry boundaries in the domain representation. The key piece of the numerical model -- the discrete Laplace solver -- is validated through $p$- and $h$-convergence studies. Moreover, to highlight the capabilities of the proposed model, we present proof-of-concept examples of simple floating bodies (a sphere and a box). Lastly, a much more involved case of an oscillating water column is considered, including generalized modes resembling the piston motion and wave sloshing effects inside the wave energy converter chamber. In this case, the spectral element model trivially computes the infinite-frequency added mass, which is a singular problem for conventional boundary element type solvers.
Efficient Partitioning Method of Large-Scale Public Safety Spatio-Temporal Data based on Information Loss Constraints
Authors: Jie Gao, Yawen Li, Zhe Xue, Zeli Guan
Subjects: Machine Learning (cs.LG); Information Retrieval (cs.IR)
Abstract
The storage, management, and application of massive spatio-temporal data are required in various practical scenarios, including public safety. However, due to the unique spatio-temporal distribution characteristics of real-world data, most existing methods have limitations in terms of the spatio-temporal proximity of data and load balancing in distributed storage. Therefore, this paper proposes an efficient partitioning method for large-scale public safety spatio-temporal data based on information loss constraints (IFL-LSTP). The IFL-LSTP model specifically targets large-scale spatio-temporal point data by combining the spatio-temporal partitioning module (STPM) with the graph partitioning module (GPM). This approach can significantly reduce the scale of data while maintaining the model's accuracy, in order to improve the partitioning efficiency. It can also ensure the load balancing of distributed storage while maintaining the spatio-temporal proximity of the data partitioning results. This method provides a new solution for distributed storage of massive spatio-temporal data. The experimental results on multiple real-world datasets demonstrate the effectiveness and superiority of IFL-LSTP.
Learning from Visual Observation via Offline Pretrained State-to-Go Transformer
Authors: Bohan Zhou, Ke Li, Jiechuan Jiang, Zongqing Lu
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Learning from visual observation (LfVO), which aims at recovering policies from visual observation data alone, is a promising yet challenging problem. Existing LfVO approaches either adopt inefficient online learning schemes or require additional task-specific information like goal states, making them ill-suited for open-ended tasks. To address these issues, we propose a two-stage framework for learning from visual observation. In the first stage, we introduce and pretrain the State-to-Go (STG) Transformer offline to predict and differentiate latent transitions of demonstrations. Subsequently, in the second stage, the STG Transformer provides intrinsic rewards for downstream reinforcement learning tasks where an agent learns merely from intrinsic rewards. Empirical results on Atari and Minecraft show that our proposed method outperforms baselines and in some tasks even achieves performance comparable to the policy learned from environmental rewards. These results shed light on the potential of utilizing video-only data to solve difficult visual reinforcement learning tasks, rather than relying on complete offline datasets containing states, actions, and rewards. The project's website and code can be found at https://sites.google.com/view/stgtransformer.
Towards Exascale CFD Simulations Using the Discontinuous Galerkin Solver FLEXI
Authors: Marcel Blind, Min Gao, Daniel Kempf, Patrick Kopper, Marius Kurz, Anna Schwarz, Andrea Beck
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Modern high-order discretizations bear considerable potential for the exascale era due to their high fidelity and the high, local computational load that allows for computational efficiency in massively parallel simulations. To this end, the discontinuous Galerkin (DG) framework FLEXI was selected to demonstrate exascale readiness within the Center of Excellence for Exascale CFD (CEEC) by simulating shock buffet on a three-dimensional wing segment at transonic flight conditions. This paper summarizes the recent progress made to enable the simulation of this challenging exascale problem. For this, it is first demonstrated that FLEXI scales excellently to over 500 000 CPU cores on HAWK at the HLRS. To tackle the considerable resolution requirements near the wall, a novel wall model is proposed that takes compressibility effects into account and yields decent results for the simulation of a NACA 64A-110 airfoil. To address the shocks in the domain, a finite-volume-based shock capturing method was implemented in FLEXI, which is validated here using the simulation of a linear compressor cascade at supersonic flow conditions, where the method is demonstrated to yield efficient, robust and accurate results. Lastly, we present the TensorFlow-Fortran-Binding (TFFB) as an easy-to-use library to deploy trained machine learning models in Fortran solvers such as FLEXI.
Map Point Selection for Visual SLAM
Authors: Christiaan J. Müller, Corné E. van Daalen
Abstract
Simultaneous localisation and mapping (SLAM) plays a vital role in autonomous robotics. Robotic platforms are often resource-constrained, and this limitation motivates resource-efficient SLAM implementations. While sparse visual SLAM algorithms offer good accuracy for modest hardware requirements, even these more scalable sparse approaches face limitations when applied to large-scale and long-term scenarios. A contributing factor is that the point clouds resulting from SLAM are inefficient to use and contain significant redundancy. This paper proposes the use of subset selection algorithms to reduce the map produced by sparse visual SLAM algorithms. Information-theoretic techniques have been applied to simpler related problems before, but they do not scale if applied to the full visual SLAM problem. This paper proposes a number of novel information-theoretic utility functions for map point selection and optimises these functions using greedy algorithms. The reduced maps are evaluated using practical data alongside an existing visual SLAM implementation (ORB-SLAM 2). Approximate selection techniques proposed in this paper achieve trajectory accuracy comparable to an offline baseline while being suitable for online use. These techniques enable the practical reduction of maps for visual SLAM with competitive trajectory accuracy. Results also demonstrate that SLAM front-end performance can significantly impact the performance of map point selection, which shows the importance of testing map point selection with a front-end implementation. To exploit this, this paper proposes an approach that includes a model of the front-end in the utility function when additional information is available. This approach outperforms alternatives on applicable datasets and highlights future research directions.
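A minimal sketch of greedy utility maximization for map point selection, assuming a monotone coverage-style utility (a toy stand-in for the paper's information-theoretic functions): the greedy rule repeatedly adds the point with the largest marginal gain, which for monotone submodular utilities is within a (1 - 1/e) factor of optimal.

```python
import numpy as np

def greedy_select(utility, candidates, budget):
    """Greedy maximization of a set-utility function.

    Repeatedly add the candidate with the largest marginal gain; stop
    early if no candidate improves the utility.
    """
    selected = []
    for _ in range(budget):
        gains = [(utility(selected + [c]) - utility(selected), c)
                 for c in candidates if c not in selected]
        best_gain, best = max(gains)
        if best_gain <= 0:
            break
        selected.append(best)
    return selected

# toy problem: pick map points that together cover the most keyframes
rng = np.random.default_rng(0)
observes = rng.random((200, 50)) < 0.1       # point i is seen by keyframe j
coverage = lambda S: np.count_nonzero(observes[S].any(axis=0)) if S else 0
points = greedy_select(coverage, list(range(200)), budget=25)
```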
Multi-Objective Hull Form Optimization with CAD Engine-based Deep Learning Physics for 3D Flow Prediction
Authors: Jocelyn Ahmed Mazari, Antoine Reverberi, Pierre Yser, Sebastian Sigmund
Abstract
In this work, we propose a built-in Deep Learning Physics Optimization (DLPO) framework to set up a shape optimization study of the Duisburg Test Case (DTC) container vessel. We present two different applications: (1) sensitivity analysis to detect the most promising generic basis hull shapes, and (2) multi-objective optimization to quantify the trade-off between optimal hull forms. The DLPO framework allows for the evaluation of design iterations automatically in an end-to-end manner. We achieved these results by coupling Extrality's Deep Learning Physics (DLP) model to a CAD engine and an optimizer. Our proposed DLP model is trained on full 3D volume data coming from RANS simulations, and it can provide accurate and high-quality 3D flow predictions in real time, which makes it a good evaluator for optimizing new container vessel designs with respect to hydrodynamic efficiency. In particular, it is able to recover the forces acting on the vessel by integration over the hull surface with a mean relative error of 3.84\% \pm 2.179\% on the total resistance. Each iteration takes only 20 seconds, thus leading to a drastic saving of time and engineering effort, while delivering valuable insight into the performance of the vessel, including RANS-like detailed flow information. We conclude that the DLPO framework is a promising tool to accelerate the ship design process and lead to more efficient ships with better hydrodynamic performance.
Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model
Abstract
We consider the problem of recovering hidden communities in the Labeled Stochastic Block Model (LSBM) with a finite number of clusters, where cluster sizes grow linearly with the total number $n$ of items. In the LSBM, a label is (independently) observed for each pair of items. Our objective is to devise an efficient algorithm that recovers clusters using the observed labels. To this end, we revisit instance-specific lower bounds on the expected number of misclassified items satisfied by any clustering algorithm. We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability. IAC consists of a one-time spectral clustering algorithm followed by an iterative likelihood-based cluster assignment improvement. This approach is based on the instance-specific lower bound and does not require any model parameters, including the number of clusters. By performing the spectral clustering only once, IAC maintains an overall computational complexity of $\mathcal{O}(n \text{polylog}(n))$. We illustrate the effectiveness of our approach through numerical experiments.
An Energy Stable Discontinuous Galerkin Time-Domain Finite Element Method in Optics and Photonics
Abstract
In this paper, a time-domain discontinuous Galerkin (TDdG) finite element method for the full system of Maxwell's equations in optics and photonics is investigated, including a complete proof of a semi-discrete error estimate. The new capabilities of methods of this type are to efficiently model linear and nonlinear effects, for example of Kerr nonlinearities. Energy stable discretizations both at the semi-discrete and the fully discrete levels are presented. In particular, the proposed semi-discrete scheme is optimally convergent in the spatial variable on Cartesian meshes with $Q_k$-type elements, and the fully discrete scheme is conditionally stable with respect to a specially defined nonlinear electromagnetic energy. The approaches presented prove to be robust and allow the modeling of optical problems and the treatment of complex nonlinearities as well as geometries of various physical systems coupled with electromagnetic fields.
Rate-Splitting Multiple Access for 6G Networks: Ten Promising Scenarios and Applications
Authors: Jeonghun Park, Byungju Lee, Jinseok Choi, Hoon Lee, Namyoon Lee, Seok-Hwan Park, Kyoung-Jae Lee, Junil Choi, Sung Ho Chae, Sang-Woon Jeon, Kyung Sup Kwak, Bruno Clerckx, Wonjae Shin
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
In the upcoming 6G era, multiple access (MA) will play an essential role in achieving the high throughput performance required in a wide range of wireless applications. Since MA and interference management are closely related issues, the conventional MA techniques are limited in that they cannot provide near-optimal performance across all interference regimes. Recently, rate-splitting multiple access (RSMA) has been gaining much attention. RSMA splits an individual message into two parts: a common part, decodable by every user, and a private part, decodable only by the intended user. Each user first decodes the common message and then decodes its private message by applying successive interference cancellation (SIC). By doing so, RSMA not only embraces the existing MA techniques as special cases but also provides significant performance gains by efficiently mitigating inter-user interference in a broad range of interference regimes. In this article, we first present the theoretical foundation of RSMA. Subsequently, we put forth four key benefits of RSMA: spectral efficiency, robustness, scalability, and flexibility. Upon this, we describe how RSMA can enable ten promising scenarios and applications, along with future research directions to pave the way for 6G.
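To make the splitting concrete, a standard two-user single-antenna instance from the RSMA literature reads as follows (a textbook sketch, not tied to any particular system in the article):

```latex
% Two-user single-antenna rate-splitting: each message W_k is split into
% a common part (jointly encoded into s_c) and a private part s_k, giving
% the superposed transmit signal
x = \sqrt{P_c}\, s_c + \sqrt{P_1}\, s_1 + \sqrt{P_2}\, s_2 .
% User k first decodes s_c, treating both private streams as noise:
R_c = \min_{k \in \{1,2\}} \log_2\!\left( 1 +
      \frac{P_c |h_k|^2}{(P_1 + P_2)|h_k|^2 + \sigma^2} \right)
% After removing s_c via SIC, user k decodes its own private stream
% (\bar{k} denotes the other user):
R_k = \log_2\!\left( 1 + \frac{P_k |h_k|^2}{P_{\bar{k}} |h_k|^2 + \sigma^2} \right)
```

Setting $P_c = 0$ reduces this to conventional transmission that treats interference as noise, one instance of the special cases the article notes RSMA subsumes.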
Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping
Abstract
Reinforcement learning often needs to deal with the exponential growth of states and actions when exploring optimal control in high-dimensional spaces (often known as the curse of dimensionality). In this work, we address this issue by learning the inherent structure of action-wise similar MDPs to appropriately balance performance degradation against sample/computational complexity. In particular, we partition the action space into multiple groups based on similarity in the transition distribution and reward function, and build a linear decomposition model to capture the difference between the intra-group transition kernel and the intra-group rewards. Both our theoretical analysis and experiments reveal a \emph{surprising and counter-intuitive result}: while a more refined grouping strategy can reduce the approximation error caused by treating actions in the same group as identical, it also leads to increased estimation error when the number of samples or the computational resources are limited. This finding highlights the grouping strategy as a new degree of freedom that can be optimized to minimize the overall performance loss. To address this issue, we formulate a general optimization problem for determining the optimal grouping strategy, which strikes a balance between performance loss and sample/computational complexity. We further propose a computationally efficient method for selecting a nearly-optimal grouping strategy, which maintains its computational complexity independent of the size of the action space.
Affine Correspondences between Multi-Camera Systems for Relative Pose Estimation
Authors: Banglei Guan, Ji Zhao
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
We present a novel method to compute the relative pose of multi-camera systems using two affine correspondences (ACs). Existing solutions to the multi-camera relative pose estimation are either restricted to special cases of motion, have too high computational complexity, or require too many point correspondences (PCs). Thus, these solvers impede an efficient or accurate relative pose estimation when applying RANSAC as a robust estimator. This paper shows that the 6DOF relative pose estimation problem using ACs permits a feasible minimal solution, when exploiting the geometric constraints between ACs and multi-camera systems using a special parameterization. We present a problem formulation based on two ACs that encompass two common types of ACs across two views, i.e., inter-camera and intra-camera. Moreover, the framework for generating the minimal solvers can be extended to solve various relative pose estimation problems, e.g., 5DOF relative pose estimation with known rotation angle prior. Experiments on both virtual and real multi-camera systems prove that the proposed solvers are more efficient than the state-of-the-art algorithms, while resulting in a better relative pose accuracy. Source code is available at https://github.com/jizhaox/relpose-mcs-depth.
Sixth-Order Hybrid FDMs and/or the M-Matrix Property for Elliptic Interface Problems with Mixed Boundary Conditions
Abstract
In this paper, we develop sixth-order hybrid finite difference methods (FDMs) for the elliptic interface problem $-\nabla \cdot (a\nabla u) = f$ in $\Omega\backslash \Gamma$, where $\Gamma$ is a smooth interface inside $\Omega$. The variable scalar coefficient $a>0$ and source $f$ are possibly discontinuous across $\Gamma$. The hybrid FDMs utilize a 9-point compact stencil at any interior regular point of the grid and a 13-point stencil at irregular points near $\Gamma$. For interior regular points away from $\Gamma$, we obtain a sixth-order 9-point compact FDM satisfying the M-matrix property. Consequently, for the elliptic problem without interface (i.e., $\Gamma$ is empty), our compact FDM satisfies the discrete maximum principle, which guarantees the theoretical sixth-order convergence. We also derive sixth-order compact (4-point for corners and 6-point for edges) FDMs having the M-matrix property at any boundary point subject to (mixed) Dirichlet/Neumann/Robin boundary conditions. For irregular points near $\Gamma$, we propose fifth-order 13-point FDMs, whose stencil coefficients can be effectively calculated by recursively solving several small linear systems. Theoretically, the proposed high order FDMs use high order (partial) derivatives of the coefficient $a$, the source term $f$, the interface curve $\Gamma$, the two jump functions along $\Gamma$, and the functions on $\partial \Omega$. Numerically, we always use function values to approximate all required high order (partial) derivatives in our hybrid FDMs without losing accuracy. Our proposed FDMs are independent of the choice of representation of $\Gamma$ and are also applicable if the jump conditions on $\Gamma$ only depend on the geometry (e.g., curvature) of the curve $\Gamma$. Our numerical experiments confirm the sixth-order convergence in the $l_{\infty}$ norm of the proposed hybrid FDMs for the elliptic interface problem.
AugDMC: Data Augmentation Guided Deep Multiple Clustering
Authors: Jiawei Yao, Enbei Liu, Maham Rashid, Juhua Hu
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Clustering aims to group similar objects together while separating dissimilar ones. Thereafter, structures hidden in the data can be identified to help understand the data in an unsupervised manner. Traditional clustering methods such as k-means provide only a single clustering for one dataset. Deep clustering methods such as auto-encoder based clustering have shown better performance, but still provide a single clustering. However, a given dataset might have multiple clustering structures, each representing a unique perspective of the data. Therefore, multiple clustering methods have been developed to discover several independent structures hidden in data. Although deep multiple clustering methods provide better performance, how to efficiently capture the alternative perspectives in data remains an open problem. In this paper, we propose AugDMC, a novel data Augmentation guided Deep Multiple Clustering method, to tackle this challenge. Specifically, AugDMC leverages data augmentations to automatically extract features related to a certain aspect of the data using self-supervised prototype-based representation learning, where different aspects of the data can be preserved under different data augmentations. Moreover, a stable optimization strategy is proposed to alleviate the instability arising from different augmentations. Thereafter, multiple clusterings based on different aspects of the data can be obtained. Experimental results on three real-world datasets, compared with state-of-the-art methods, validate the effectiveness of the proposed method.
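A minimal sketch of augmentation-guided multiple clustering, assuming toy data with two independent latent aspects and stand-in "augmentation-specific" features in place of the learned self-supervised representations:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 300
colour = rng.integers(0, 3, n)            # latent aspect 1
shape = rng.integers(0, 3, n)             # latent aspect 2, independent of aspect 1
X = np.hstack([np.eye(3)[colour], np.eye(3)[shape]]) + 0.1 * rng.normal(size=(n, 6))

# Stand-ins for augmentation-specific features: each "augmentation" preserves
# one aspect of the data and suppresses the other, mimicking what AugDMC's
# prototype-based representation learning extracts under a given augmentation.
features = {
    "colour-preserving": np.hstack([X[:, :3], 0 * X[:, 3:]]),
    "shape-preserving": np.hstack([0 * X[:, :3], X[:, 3:]]),
}

clusterings = {name: KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(f)
               for name, f in features.items()}

# Two alternative clusterings of the same dataset, one per aspect.
for name, labels in clusterings.items():
    print(name, labels[:12])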
Transferable Curricula through Difficulty Conditioned Generators
Abstract
Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as Starcraft, Go, and Chess. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods for curriculum generation focus on training RL agents efficiently, yet these methods rely on surrogate measures to track student progress and are not suited for training robots in the real world (or, more ambitiously, humans). In this paper, we introduce a method named Parameterized Environment Response Model (PERM) that shows promising results in training RL agents in parameterized environments. Inspired by Item Response Theory, PERM seeks to model the difficulty of environments and the ability of RL agents directly. Given that RL agents and humans are trained more efficiently within their "zone of proximal development", our method generates a curriculum by matching the difficulty of an environment to the current ability of the student. In addition, PERM can be trained offline and does not employ non-stationary measures of student ability, making it suitable for transfer between students. We demonstrate PERM's ability to represent the environment parameter space, and show that training RL agents with PERM produces strong performance in deterministic environments. Lastly, we show that our method is transferable between students, without any sacrifice in training quality.
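A small sketch of the IRT-style matching loop, assuming a 1-parameter (Rasch) response model and a hand-set ability update; PERM itself learns the response model offline rather than using these rules:

import numpy as np

def p_success(ability, difficulty):
    # Rasch / 1-parameter IRT model: success odds depend on the gap
    # between student ability and environment difficulty.
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

rng = np.random.default_rng(0)
env_difficulty = np.linspace(-3, 3, 25)     # parameterised environment bank
ability = -2.0                              # true student ability (hidden)
est_ability = 0.0

for episode in range(40):
    # Curriculum step: pick the environment whose difficulty best matches
    # the current ability estimate ("zone of proximal development").
    env = env_difficulty[np.argmin(np.abs(env_difficulty - est_ability))]
    success = rng.random() < p_success(ability, env)
    # Toy online ability estimate; a stochastic-approximation step stands in
    # for PERM's learned inference of student ability.
    est_ability += 0.3 * ((1.0 if success else 0.0) - p_success(est_ability, env))
    ability += 0.05 * success               # the student improves by succeeding

print(f"final ability {ability:.2f}, estimate {est_ability:.2f}")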
Analysing Mechanisms for Virtual Channel Management in Low-Diameter networks
Authors: Alejandro Cano, Cristóbal Camarero, Carmen Martínez, Ramón Beivide
Subjects: Hardware Architecture (cs.AR); Distributed, Parallel, and Cluster Computing (cs.DC); Networking and Internet Architecture (cs.NI)
Abstract
To interconnect their growing number of servers, current supercomputers and data centers are starting to adopt low-diameter networks, such as HyperX, Dragonfly and Dragonfly+. These emergent topologies require balancing the load over their links, and finding suitable non-minimal routing mechanisms for them becomes particularly challenging. The Valiant load balancing scheme is a very popular choice for non-minimal routing, and evolved adaptive routing mechanisms implemented in real systems are based on it. All these low-diameter networks are deadlock-prone when non-minimal routing is employed. Routing deadlocks occur when packets cannot progress due to cyclic dependencies. Therefore, developing efficient deadlock-free packet routing mechanisms is critical for the progress of these emergent networks. The routing function includes the routing algorithm for path selection and the buffer management policy that dictates how packets allocate the buffers of the switches on their paths. For the same routing algorithm, a different buffer management mechanism can lead to very different performance. Moreover, certain mechanisms considered efficient for avoiding deadlocks may still suffer from hard-to-pinpoint instabilities that make the network response erratic. This paper explores the impact of these buffer management policies on the performance of current interconnection networks, showing a performance drop of 90\% when an incorrect buffer management policy is used. Moreover, this study not only characterizes some of these undesirable scenarios but also proposes practicable solutions.
Deep Metric Learning with Soft Orthogonal Proxies
Abstract
Deep Metric Learning (DML) models rely on strong representations and similarity-based measures with specific loss functions. Proxy-based losses have shown great performance compared to pair-based losses in terms of convergence speed. However, proxies that are assigned to different classes may end up being closely located in the embedding space, making it hard to distinguish between positive and negative items. Alternatively, they may become highly correlated and hence provide redundant information to the model. To address these issues, we propose a novel approach that introduces a Soft Orthogonality (SO) constraint on proxies. The constraint ensures that the proxies are as orthogonal as possible, thereby controlling their positions in the embedding space. Our approach leverages a Data-Efficient Image Transformer (DeiT) as an encoder to extract contextual features from images along with a DML objective. The objective combines the Proxy Anchor loss with the SO regularization. We evaluate our method on four public benchmarks for category-level image retrieval and demonstrate its effectiveness with comprehensive experimental results and ablation studies. Our evaluations demonstrate the superiority of our proposed approach over state-of-the-art methods by a significant margin.
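A minimal PyTorch sketch of one plausible form of the SO regularizer, penalizing pairwise cosine similarity between class proxies (the weight lam and the exact penalty form are assumptions, not necessarily the paper's):

import torch

def soft_orthogonality(proxies):
    # proxies: (C, D), one learnable proxy per class.
    P = torch.nn.functional.normalize(proxies, dim=1)     # unit-norm proxies
    gram = P @ P.t()                                      # pairwise cosine similarities
    off_diag = gram - torch.eye(P.size(0), device=P.device)
    # Penalise residual similarity between proxies of different classes,
    # pushing them towards mutual orthogonality in the embedding space.
    return off_diag.pow(2).sum() / (P.size(0) * (P.size(0) - 1))

proxies = torch.nn.Parameter(torch.randn(100, 384))       # e.g. 100 classes, DeiT-small dim
lam = 0.1                                                  # assumed regularisation weight
# Usage sketch (proxy_anchor_loss is a hypothetical stand-in for the DML loss):
# total_loss = proxy_anchor_loss(embeddings, labels, proxies) + lam * soft_orthogonality(proxies)
print(soft_orthogonality(proxies).item())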
Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting
Abstract
Most offline reinforcement learning (RL) algorithms return a target policy maximizing a trade-off between (1) the expected performance gain over the behavior policy that collected the dataset, and (2) the risk stemming from the out-of-distribution-ness of the induced state-action occupancy. It follows that the performance of the target policy is strongly related to the performance of the behavior policy and, thus, to the trajectory return distribution of the dataset. We show that in mixed datasets consisting of mostly low-return trajectories and a minority of high-return trajectories, state-of-the-art offline RL algorithms are overly restrained by the low-return trajectories and fail to exploit the high-performing trajectories to the fullest. To overcome this issue, we show that, in deterministic MDPs with stochastic initial states, the dataset sampling can be re-weighted to induce an artificial dataset whose behavior policy has a higher return. This re-weighted sampling strategy can be combined with any offline RL algorithm. We further show that the opportunity for performance improvement over the behavior policy correlates with the positive-sided variance of the returns of the trajectories in the dataset. We empirically show that while CQL, IQL, and TD3+BC achieve only a part of this potential policy improvement, these same algorithms combined with our reweighted sampling strategy fully exploit the dataset. Furthermore, we empirically demonstrate that, despite its theoretical limitation, the approach may still be efficient in stochastic environments. The code is available at https://github.com/Improbable-AI/harness-offline-rl.
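One simple instance of return-based re-weighting, with an assumed exponential weighting and temperature tau (the paper's exact weighting scheme may differ):

import numpy as np

rng = np.random.default_rng(0)
# Mixed offline dataset: mostly low-return trajectories, a few high-return ones.
returns = np.concatenate([rng.normal(10, 2, 950), rng.normal(90, 5, 50)])

# Return-weighted sampling: tau controls how strongly high-return
# trajectories are favoured over the uniform behaviour distribution.
tau = 10.0
w = np.exp((returns - returns.max()) / tau)
p = w / w.sum()

uniform_batch = rng.choice(returns, size=256)            # standard sampling
weighted_batch = rng.choice(returns, size=256, p=p)      # re-weighted sampling
print(f"behaviour return under uniform sampling:  {uniform_batch.mean():.1f}")
print(f"behaviour return under weighted sampling: {weighted_batch.mean():.1f}")
# Any offline RL algorithm (CQL, IQL, TD3+BC, ...) can then draw its training
# transitions from trajectories sampled with probability p.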
PromptIR: Prompting for All-in-One Blind Image Restoration
Abstract
Image restoration involves recovering a high-quality clean image from its degraded version. Deep learning-based methods have significantly improved image restoration performance; however, they have limited generalization ability across different degradation types and levels. This restricts their real-world application, since it requires training individual models for each specific degradation and knowing the input degradation type to apply the relevant model. We present a prompt-based learning approach, PromptIR, for All-In-One image restoration that can effectively restore images from various types and levels of degradation. In particular, our method uses prompts to encode degradation-specific information, which is then used to dynamically guide the restoration network. This allows our method to generalize to different degradation types and levels while still achieving state-of-the-art results on image denoising, deraining, and dehazing. Overall, PromptIR offers a generic and efficient plugin module with few lightweight prompts that can be used to restore images of various types and levels of degradation with no prior information on the corruptions present in the image. Our code and pretrained models are available here: https://github.com/va1shn9v/PromptIR
Keyword: faster
Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons
Authors: Sidi Yaya Arnaud Yarga, Sean U. N. Wood
Subjects: Neural and Evolutionary Computing (cs.NE)
Abstract
Spiking neural networks (SNN) are able to learn spatiotemporal features while using less energy, especially on neuromorphic hardware. The most widely used spiking neuron in deep learning is the Leaky Integrate and Fire (LIF) neuron. However, LIF neurons operate sequentially, since the computation of the state at time t relies on the state at time t-1 having been computed. This limitation is shared with Recurrent Neural Networks (RNN) and results in slow training on Graphics Processing Units (GPU). In this paper, we propose the Stochastic Parallelizable Spiking Neuron (SPSN) to overcome the sequential training limitation of LIF neurons. By separating the linear integration component from the non-linear spiking function, SPSN can be run in parallel over time. The proposed approach achieves performance comparable with the state of the art for feedforward neural networks on the Spiking Heidelberg Digits (SHD) dataset, outperforming LIF networks while training 10 times faster, and outperforming non-spiking networks with the same network architecture. For longer input sequences of 10000 time-steps, we show that the proposed approach results in 4000 times faster training, demonstrating its potential to accelerate SNN training for very large datasets.
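A sketch of why the decoupling helps, in NumPy: the linear (leaky) integration is a convolution that can be evaluated for all time steps at once; a sigmoid-Bernoulli spiking stage (an assumed form) is then applied pointwise:

import numpy as np

rng = np.random.default_rng(0)
T, beta, theta = 100, 0.9, 0.5           # time steps, leak factor, spike threshold
inputs = rng.normal(0, 0.3, T)           # input current for a single neuron

# Sequential LIF-style linear integration (no reset): T dependent steps.
u_seq = np.zeros(T)
u_seq[0] = inputs[0]
for t in range(1, T):
    u_seq[t] = beta * u_seq[t - 1] + inputs[t]

# The same integration as a convolution with an exponential kernel: every
# output depends only on the inputs, so all time steps can be computed in
# parallel (on GPU this would be a batched convolution or an FFT).
kernel = beta ** np.arange(T)
u_par = np.convolve(inputs, kernel)[:T]
assert np.allclose(u_seq, u_par)

# Stochastic non-linear spiking applied pointwise afterwards.
p_spike = 1.0 / (1.0 + np.exp(-(u_par - theta) / 0.1))
spikes = (rng.random(T) < p_spike).astype(float)
print("spike rate:", spikes.mean())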
NoisyILRMA: Diffuse-Noise-Aware Independent Low-Rank Matrix Analysis for Fast Blind Source Extraction
Abstract
In this paper, we address the multichannel blind source extraction (BSE) of a single source in diffuse noise environments. To solve this problem even faster than fast multichannel nonnegative matrix factorization (FastMNMF) and its variant, we propose a BSE method called NoisyILRMA, a modification of independent low-rank matrix analysis (ILRMA) that accounts for diffuse noise. NoisyILRMA achieves considerably fast BSE by incorporating an algorithm developed for independent vector extraction. In addition, to improve the BSE performance of NoisyILRMA, we propose a mechanism that switches from an ILRMA-like nonnegative matrix factorization source model to a more expressive source model during optimization. In our experiments, we show that NoisyILRMA runs faster than a FastMNMF algorithm while maintaining BSE performance. We also confirm that the switching mechanism improves the BSE performance of NoisyILRMA.
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
Abstract
We present a new dataset condensation framework termed Squeeze, Recover and Relabel (SRe$^2$L) that decouples the bilevel optimization of model and synthetic data during training, in order to handle varying scales of datasets, model architectures, and image resolutions for effective dataset condensation. The proposed method demonstrates flexibility across diverse dataset scales and exhibits multiple advantages in terms of arbitrary resolutions of synthesized images, low training cost and memory consumption with high-resolution training, and the ability to scale up to arbitrary evaluation network architectures. Extensive experiments are conducted on Tiny-ImageNet and full ImageNet-1K datasets. Under 50 IPC, our approach achieves the highest 42.5% and 60.8% validation accuracy on Tiny-ImageNet and ImageNet-1K, outperforming all previous state-of-the-art methods by margins of 14.5% and 32.9%, respectively. Our approach is also approximately 52$\times$ (ConvNet-4) and 16$\times$ (ResNet-18) faster than MTT, with 11.6$\times$ and 6.4$\times$ lower memory consumption during data synthesis. Our code and condensed datasets of 50, 200 IPC with 4K recovery budget are available at https://zeyuanyin.github.io/projects/SRe2L/.
Keyword: mobile
Slimmable Encoders for Flexible Split DNNs in Bandwidth and Resource Constrained IoT Systems
Authors: Juliano S. Assine, J. C. S. Santos Filho, Eduardo Valle, Marco Levorato
Abstract
The execution of large deep neural networks (DNN) at mobile edge devices requires considerable consumption of critical resources, such as energy, while imposing demands on hardware capabilities. In approaches based on edge computing, the execution of the models is offloaded to a compute-capable device positioned at the edge of 5G infrastructures. The main issue of the latter class of approaches is the need to transport information-rich signals over wireless links with limited and time-varying capacity. The recent split computing paradigm attempts to resolve this impasse by distributing the execution of DNN models across the layers of the system to reduce the amount of data to be transmitted, while imposing minimal computing load on mobile devices. In this context, we propose a novel split computing approach based on slimmable ensemble encoders. The key advantage of our design is the ability to adapt computational load and transmitted data size in real time with minimal overhead and time. This is in contrast with existing approaches, where the same adaptation requires costly context switching and model loading. Moreover, our model outperforms existing solutions in terms of compression efficacy and execution time, especially in the context of weak mobile devices. We present a comprehensive comparison with the most advanced split computing solutions, as well as an experimental evaluation on GPU-less devices.
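A minimal PyTorch sketch of a slimmable encoder: one shared weight matrix is sliced at call time, so the device can switch width, and hence compute and transmitted code size, without reloading a model. Widths and dimensions here are illustrative, not the paper's configuration:

import torch
import torch.nn as nn

class SlimmableLinear(nn.Module):
    """Linear layer whose active width is chosen at call time by slicing
    one shared weight matrix, so switching costs no model reload."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_f))
    def forward(self, x, in_w, out_w):
        return nn.functional.linear(x[..., :in_w], self.weight[:out_w, :in_w],
                                    self.bias[:out_w])

class SlimmableEncoder(nn.Module):
    widths = [0.25, 0.5, 1.0]          # assumed width multipliers
    def __init__(self, d_in=64, d_hidden=128, d_code=32):
        super().__init__()
        self.fc1 = SlimmableLinear(d_in, d_hidden)
        self.fc2 = SlimmableLinear(d_hidden, d_code)
        self.d = (d_in, d_hidden, d_code)
    def forward(self, x, w):
        d_in, d_h, d_c = self.d
        h = torch.relu(self.fc1(x, d_in, int(w * d_h)))
        return self.fc2(h, int(w * d_h), int(w * d_c))  # smaller code => fewer bytes sent

enc = SlimmableEncoder()
x = torch.randn(4, 64)
for w in SlimmableEncoder.widths:      # trade accuracy vs. compute/bandwidth at runtime
    print(w, enc(x, w).shape)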
Accuracy evaluation of a Low-Cost Differential Global Positioning System for mobile robotics
Authors: Christian Blesing, Jan Finke, Sebastian Hoose, Anneliese Schweigert, Jonas Stenzel
Subjects: Robotics (cs.RO); Systems and Control (eess.SY)
Abstract
Differential GPS, commonly referred to as DGPS, is a well-known and very accurate localization system for many outdoor applications, in particular for mobile outdoor robotics. The most common drawback of DGPS systems is the high cost of both the base station and the receivers. In this paper, we present a setup that uses third-party open-source software and a Ublox ZED-F9P chip to build a ROS-enabled low-cost DGPS setup that is ready to use in a few hours. The main goal of this paper is to analyze and evaluate the repetitive and absolute accuracy of the system. The first measurement also examines the differences between a SAPOS base station and a locally installed one consisting of low-cost components. During the evaluation of the absolute accuracy, a moving mobile robot is used on the receiver side. It is tracked by a highly accurate VICON motion capture system.
A new 3-DOF 2T1R parallel mechanism: Topology design and kinematics
Authors: Huiping Shen, Zhongqiu Du, Damien Chablat (LS2N - équipe ReV, LS2N), Ju Li, Guanglei Wu
Abstract
This article presents a new three-degree-of-freedom (3-DOF) parallel mechanism (PM) with two translations and one rotation (2T1R), designed based on the topological design theory of parallel mechanisms using position and orientation characteristics (POC). The PM is primarily intended for use in package sorting and delivery. The mobile platform of the PM moves along a translation axis, picks up objects from a conveyor belt, and tilts them to either side of the axis. We first calculate the PM's topological characteristics, such as the degree of freedom (DOF) and the degree of coupling, and provide its topological analytical formula to represent the topological information of the PM. Next, we solve the direct and inverse kinematic models based on the kinematic modelling principle using the topological features. The models are purely analytic and are broken down into a series of quadratic equations, making them suitable for use in an industrial robot. We also study the singular configurations to identify the serial and parallel singularities. Using the decoupling properties, we size the mechanism for the package sorting and depositing problem with an algebraic approach. To determine the smallest segment lengths, we use a cylindrical algebraic decomposition to solve a system with inequalities.
Minimalist and High-Quality Panoramic Imaging with PSF-aware Transformers
Authors: Qi Jiang, Shaohua Gao, Yao Gao, Kailun Yang, Zhonghua Yi, Hao Shi, Lei Sun, Kaiwei Wang
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Optics (physics.optics)
Abstract
High-quality panoramic images with a Field of View (FoV) of 360 degrees are essential for contemporary panoramic computer vision tasks. However, conventional imaging systems come with sophisticated lens designs and heavy optical components. This disqualifies their usage in many mobile and wearable applications where thin, portable, and minimalist imaging systems are desired. In this paper, we propose a Panoramic Computational Imaging Engine (PCIE) to address minimalist and high-quality panoramic imaging. With fewer than three spherical lenses, a Minimalist Panoramic Imaging Prototype (MPIP) is constructed based on the design of the Panoramic Annular Lens (PAL), but with low-quality imaging results due to aberrations and a small image plane size. We propose two pipelines, i.e., Aberration Correction (AC) and Super-Resolution and Aberration Correction (SR&AC), to solve the image quality problems of MPIP, with imaging sensors of small and large pixel size, respectively. To provide a universal network for the two pipelines, we leverage the information from the Point Spread Function (PSF) of the optical system and design a PSF-aware Aberration-image Recovery Transformer (PART), in which the self-attention calculation and feature extraction are guided via PSF-aware mechanisms. We train PART on synthetic image pairs from simulation and put forward the PALHQ dataset to fill the gap of real-world high-quality PAL images for low-level vision. A comprehensive set of experiments on synthetic and real-world benchmarks demonstrates the impressive imaging results of PCIE and the effectiveness of the plug-and-play PSF-aware mechanisms. We further deliver heuristic experimental findings for minimalist and high-quality panoramic imaging. Our dataset and code will be available at https://github.com/zju-jiangqi/PCIE-PART.
Investigating the Usability of Collaborative Robot control through Hands-Free Operation using Eye gaze and Augmented Reality
Abstract
This paper proposes a novel operation for controlling a mobile robot using a head-mounted device. Conventionally, robots are operated using computers or a joystick, which limits usability and flexibility because the control equipment has to be carried by hand. This lack of flexibility may prevent workers from multitasking or carrying objects while operating the robot. To address this limitation, we propose a hands-free method to operate the mobile robot with human gaze in an Augmented Reality (AR) environment. The proposed work is demonstrated using the HoloLens 2 to control the mobile robot, a Robotnik Summit-XL, through eye gaze in AR. Stable speed control and navigation of the mobile robot were achieved through admittance control computed from the gaze position. An experiment was conducted to compare the usability of the joystick and the proposed operation, and the results were validated through surveys (i.e., SUS, SEQ). The survey results from the participants showed that the wearer of the HoloLens accurately operated the mobile robot in a collaborative manner. Both the joystick and the HoloLens were rated as easy to use, with above-average usability. This suggests that the HoloLens can replace the joystick to allow hands-free robot operation and has the potential to increase the efficiency of human-robot collaboration in situations where hands-free control is needed.
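A minimal sketch of gaze-driven admittance control with assumed gains (not the paper's values): the gaze point acts as a virtual spring force, and the admittance model turns it into a smooth velocity command:

import numpy as np

# Admittance model M * dv/dt + D * v = F, discretised with explicit Euler.
M, D, dt = 2.0, 4.0, 0.05            # virtual mass, damping, control period [s]
v = np.zeros(2)                      # commanded planar velocity (vx, vy)
robot = np.zeros(2)
gaze_target = np.array([1.5, -0.8])  # where the headset wearer is looking

for _ in range(200):
    err = gaze_target - robot        # gaze offset in the robot frame
    force = 1.0 * err                # virtual spring pulling toward the gaze point
    v += dt * (force - D * v) / M    # admittance dynamics -> smooth velocity
    robot += dt * v                  # robot follows the velocity command

print("final position:", robot.round(3), "residual speed:", np.linalg.norm(v).round(3))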
Keyword: pruning
Data-Free Backbone Fine-Tuning for Pruned Neural Networks
Authors: Adrian Holzbock, Achyut Hegde, Klaus Dietmayer, Vasileios Belagiannis
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Model compression techniques reduce the computational load and memory consumption of deep neural networks. After the compression operation, e.g., parameter pruning, the model is normally fine-tuned on the original training dataset to recover from the performance drop caused by compression. However, the training data is not always available, due to privacy issues or other factors. In this work, we present a data-free fine-tuning approach for pruning the backbone of deep neural networks. In particular, the pruned network backbone is trained on synthetically generated images with our proposed intermediate supervision, which mimics the unpruned backbone's output feature map. Afterwards, the pruned backbone can be combined with the original network head to make predictions. We generate synthetic images by back-propagating gradients to noise images, while relying on L1-pruning for the backbone pruning. In our experiments, we show that our approach is task-independent because only the backbone is pruned. By evaluating our approach on 2D human pose estimation, object detection, and image classification, we demonstrate promising performance compared to the unpruned model. Our code is available at https://github.com/holzbock/dfbf.
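A toy PyTorch sketch of the two stages, with assumed synthesis and supervision objectives (the paper's exact losses and image priors may differ):

import torch
import torch.nn as nn

# Toy stand-ins for the unpruned (teacher) and L1-pruned (student) backbones.
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1))
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 16, 3, padding=1))   # thinner after pruning
for p in teacher.parameters():
    p.requires_grad_(False)

# Stage 1 (assumed objective): synthesise inputs by back-propagating gradients
# into noise so the teacher responds strongly; real methods add image priors.
x = torch.randn(8, 3, 32, 32, requires_grad=True)
opt_img = torch.optim.Adam([x], lr=0.1)
for _ in range(50):
    opt_img.zero_grad()
    loss_img = -teacher(x).abs().mean() + 1e-4 * x.pow(2).mean()
    loss_img.backward()
    opt_img.step()

# Stage 2: intermediate supervision, i.e. the pruned backbone learns to mimic
# the unpruned backbone's output feature map on the synthetic images.
x_syn = x.detach()
target = teacher(x_syn)
opt_net = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(200):
    opt_net.zero_grad()
    loss = nn.functional.mse_loss(student(x_syn), target)
    loss.backward()
    opt_net.step()
print("feature-mimicking loss:", float(loss))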
Keyword: diffusion
Semi-Implicit Denoising Diffusion Models (SIDDMs)
Abstract
Despite the proliferation of generative models, achieving fast sampling during inference without compromising sample diversity and quality remains challenging. Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps. The Denoising Diffusion Generative Adversarial Networks (DDGAN) attempted to circumvent this limitation by integrating a GAN model for larger jumps in the diffusion process. However, DDGAN encountered scalability limitations when applied to large datasets. To address these limitations, we introduce a novel approach that tackles the problem by matching implicit and explicit factors. More specifically, our approach involves utilizing an implicit model to match the marginal distributions of noisy data and the explicit conditional distribution of the forward diffusion. This combination allows us to effectively match the joint denoising distributions. Unlike DDPM but similar to DDGAN, we do not enforce a parametric distribution for the reverse step, enabling us to take large steps during inference. Similar to the DDPM but unlike DDGAN, we take advantage of the exact form of the diffusion process. We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
One at A Time: Multi-step Volumetric Probability Distribution Diffusion for Depth Estimation
Abstract
Recent works have explored the fundamental role of depth estimation in multi-view stereo (MVS) and semantic scene completion (SSC). They generally construct 3D cost volumes to explore geometric correspondence in depth and estimate such volumes in a single step, relying directly on the ground truth approximation. However, this problem cannot be thoroughly handled in one step due to complex empirical distributions, especially in challenging regions like occlusions, reflections, etc. In this paper, we formulate the depth estimation task as a multi-step distribution approximation process, and introduce a new paradigm of modeling the Volumetric Probability Distribution progressively (step-by-step), following a Markov chain with Diffusion models (VPDD). Specifically, to constrain the multi-step generation of volume in VPDD, we construct a meta volume guidance and a confidence-aware contextual guidance as conditional geometry priors to facilitate the distribution approximation. For the sampling process, we further investigate an online filtering strategy to maintain consistency in volume representations for stable training. Experiments demonstrate that our plug-and-play VPDD outperforms the state of the art on MVS and SSC tasks, and can also be easily extended to different baselines to obtain improvements. It is worth mentioning that ours is the first camera-based work to surpass LiDAR-based methods on the SemanticKITTI dataset.
DiffWA: Diffusion Models for Watermark Attack
Authors: Xinyu Li
Subjects: Multimedia (cs.MM); Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Abstract
With the rapid development of deep neural networks (DNNs), many robust blind watermarking algorithms and frameworks have been proposed and have achieved good results. At present, watermark attack algorithms cannot compete with watermark addition algorithms, and many watermark attack algorithms only aim at interfering with the normal extraction of the watermark, causing great visual loss to the image. To this end, we propose DiffWA, a conditional diffusion model with distance guidance for watermark attack, which can restore the image while removing the embedded watermark. The core of our method is to train an image-to-image conditional diffusion model on unwatermarked images and to guide the conditional model using distance guidance when sampling, so that the model generates unwatermarked images similar to the original images. We conducted experiments on CIFAR-10 using our proposed model. The results show that the model removes the watermark effectively, raising the bit error rate of watermark extraction above 0.4. At the same time, the attacked image maintains good visual quality, with a PSNR above 31 and an SSIM above 0.97 compared with the original image.
On the Construction of Near-MDS Matrices
Abstract
The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. However, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer, compared to MDS matrices. In this paper, we study NMDS matrices, exploring their construction in both recursive and nonrecursive settings. We provide several theoretical results and explore the hardware efficiency of the construction of NMDS matrices. Additionally, we make comparisons between the results of NMDS and MDS matrices whenever possible. For the recursive approach, we study the DLS matrices and provide some theoretical results on their use. Some of the results are used to restrict the search space of the DLS matrices. We also show that over a field of characteristic 2, any sparse matrix of order $n\geq 4$ with fixed XOR value of 1 cannot be an NMDS when raised to a power of $k\leq n$. Following that, we use the generalized DLS (GDLS) matrices to provide some lightweight recursive NMDS matrices of several orders that perform better than the existing matrices in terms of hardware cost or the number of iterations. For the nonrecursive construction of NMDS matrices, we study various structures, such as circulant and left-circulant matrices, and their generalizations: Toeplitz and Hankel matrices. In addition, we prove that Toeplitz matrices of order $n>4$ cannot be simultaneously NMDS and involutory over a field of characteristic 2. Finally, we use GDLS matrices to provide some lightweight NMDS matrices that can be computed in one clock cycle. The proposed nonrecursive NMDS matrices of orders 4, 5, 6, 7, and 8 can be implemented with 24, 50, 65, 96, and 108 XORs over $\mathbb{F}_{2^4}$, respectively.
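A small self-contained check of the branch-number notion used above, over GF(2^4) with modulus x^4 + x + 1 (an assumed field representation; the example matrix is illustrative, not taken from the paper). An order-n matrix is MDS iff its differential branch number is n+1; for NMDS, both the differential and linear branch numbers must equal n, and only the differential one is checked here:

from itertools import product

MOD = 0b10011  # x^4 + x + 1

def gf16_mul(a, b):
    # Carry-less multiplication in GF(2^4), reducing modulo x^4 + x + 1.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def branch_number(M):
    # Differential branch number: min over nonzero x of wt(x) + wt(M x),
    # where wt counts nonzero field elements.
    n = len(M)
    best = 2 * n
    for x in product(range(16), repeat=n):
        if not any(x):
            continue
        y = [0] * n
        for i in range(n):
            acc = 0
            for j in range(n):
                acc ^= gf16_mul(M[i][j], x[j])
            y[i] = acc
        best = min(best, sum(v != 0 for v in x) + sum(v != 0 for v in y))
    return best

M = [[1, 2, 1, 4], [4, 1, 2, 1], [1, 4, 1, 2], [2, 1, 4, 1]]  # illustrative circulant
print("differential branch number:", branch_number(M))  # n+1 = MDS; n is necessary for NMDS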
On the Direct Construction of MDS and Near-MDS Matrices
Abstract
The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. Consequently, various methods have been proposed for designing MDS matrices, including search and direct methods. While exhaustive search is suitable for small order MDS matrices, direct constructions are preferred for larger orders due to the vast search space involved. In the literature, there has been extensive research on the direct construction of MDS matrices using both recursive and nonrecursive methods. On the other hand, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer compared to MDS matrices. However, no direct construction method is available in the literature for constructing recursive NMDS matrices. This paper introduces some direct constructions of NMDS matrices in both nonrecursive and recursive settings. Additionally, it presents some direct constructions of nonrecursive MDS matrices from the generalized Vandermonde matrices. We propose a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices. Furthermore, we prove some folklore results that are used in the literature related to the NMDS code.
Towards More Realistic Membership Inference Attacks on Large Diffusion Models
Authors: Jan Dubiński, Antoni Kowalczuk, Stanisław Pawlak, Przemysław Rokita, Tomasz Trzciński, Paweł Morawiecki
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Generative diffusion models, including Stable Diffusion and Midjourney, can generate visually appealing, diverse, and high-resolution images for various applications. These models are trained on billions of internet-sourced images, raising significant concerns about the potential unauthorized use of copyright-protected images. In this paper, we examine whether it is possible to determine if a specific image was used in the training set, a problem known in the cybersecurity community and referred to as a membership inference attack. Our focus is on Stable Diffusion, and we address the challenge of designing a fair evaluation framework to answer this membership question. We propose a methodology to establish a fair evaluation setup and apply it to Stable Diffusion, enabling potential extensions to other generative models. Utilizing this evaluation setup, we execute membership attacks (both known and newly introduced). Our research reveals that previously proposed evaluation setups do not provide a full understanding of the effectiveness of membership inference attacks. We conclude that the membership inference attack remains a significant challenge for large diffusion models (often deployed as black-box systems), indicating that related privacy and copyright issues will persist in the foreseeable future.
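A sketch of the simplest membership inference baseline, a loss-threshold attack on synthetic loss values (the numbers are illustrative; the paper evaluates stronger attacks against Stable Diffusion):

import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in: members (seen in training) tend to have lower loss under the
# model than non-members; a loss threshold then predicts membership.
member_loss = rng.gamma(2.0, 0.4, 1000)       # hypothetical losses on training images
nonmember_loss = rng.gamma(2.0, 0.6, 1000)    # hypothetical losses on held-out images

threshold = np.median(np.concatenate([member_loss, nonmember_loss]))
pred_member = lambda loss: loss < threshold

tpr = pred_member(member_loss).mean()         # true positive rate
fpr = pred_member(nonmember_loss).mean()      # false positive rate
print(f"TPR {tpr:.2f} vs FPR {fpr:.2f}")
# As the paper argues, a fair evaluation must draw members and non-members
# from the same distribution, otherwise the attack measures distribution
# shift rather than memorisation.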
Continuous Layout Editing of Single Images with Diffusion Models
Authors: Zhiyuan Zhang, Zhitong Huang, Jing Liao
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recent advancements in large-scale text-to-image diffusion models have enabled many applications in image editing. However, none of these methods have been able to edit the layout of single existing images. To address this gap, we propose the first framework for layout editing of a single image while preserving its visual properties, thus allowing for continuous editing on a single image. Our approach is achieved through two key modules. First, to preserve the characteristics of multiple objects within an image, we disentangle the concepts of different objects and embed them into separate textual tokens using a novel method called masked textual inversion. Next, we propose a training-free optimization method to perform layout control for a pre-trained diffusion model, which allows us to regenerate images with learned concepts and align them with user-specified layouts. As the first framework to edit the layout of existing images, we demonstrate that our method is effective and outperforms other baselines that were modified to support this task. Our code will be freely available for public use upon acceptance.
Keyword: adaptive
MPSTAN: Metapopulation-based Spatio-Temporal Attention Network for Epidemic Forecasting
Authors: Junkai Mao, Yuexing Han, Bing Wang
Subjects: Machine Learning (cs.LG); Social and Information Networks (cs.SI)
Abstract
Accurate epidemic forecasting plays a vital role for governments in developing effective prevention measures for suppressing epidemics. Most of the present spatio-temporal models cannot provide a general framework for stable and accurate forecasting of epidemics with diverse evolution trends. Incorporating epidemiological domain knowledge ranging from single-patch to multi-patch into neural networks is expected to improve forecasting accuracy. However, relying solely on single-patch knowledge neglects inter-patch interactions, while constructing multi-patch knowledge is challenging without population mobility data. To address the aforementioned problems, we propose a novel hybrid model called Metapopulation-based Spatio-Temporal Attention Network (MPSTAN). This model aims to improve the accuracy of epidemic forecasting by incorporating multi-patch epidemiological knowledge into a spatio-temporal model and adaptively defining inter-patch interactions. Moreover, we incorporate inter-patch epidemiological knowledge into both the model construction and loss function to help the model learn epidemic transmission dynamics. Extensive experiments conducted on two representative datasets with different epidemiological evolution trends demonstrate that our proposed model outperforms the baselines and provides more accurate and stable short- and long-term forecasting. We confirm the effectiveness of domain knowledge in the learning model and investigate the impact of different ways of integrating domain knowledge on forecasting. We observe that using domain knowledge in both model construction and loss functions leads to more efficient forecasting, and selecting appropriate domain knowledge can improve accuracy further.
MultiTASC: A Multi-Tenancy-Aware Scheduler for Cascaded DNN Inference at the Consumer Edge
Authors: Sokratis Nikolaidis, Stylianos I. Venieris, Iakovos S. Venieris
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Cascade systems comprise a two-model sequence, with a lightweight model processing all samples and a heavier, higher-accuracy model conditionally refining harder samples to improve accuracy. By placing the light model on the device side and the heavy model on a server, model cascades constitute a widely used distributed inference approach. With the rapid expansion of intelligent indoor environments, such as smart homes, the new setting of Multi-Device Cascade is emerging where multiple and diverse devices are to simultaneously use a shared heavy model on the same server, typically located within or close to the consumer environment. This work presents MultiTASC, a multi-tenancy-aware scheduler that adaptively controls the forwarding decision functions of the devices in order to maximize the system throughput, while sustaining high accuracy and low latency. By explicitly considering device heterogeneity, our scheduler improves the latency service-level objective (SLO) satisfaction rate by 20-25 percentage points (pp) over state-of-the-art cascade methods in highly heterogeneous setups, while serving over 40 devices, showcasing its scalability.
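A toy sketch of multi-tenancy-aware threshold adaptation; the capacity, rates, and update rules are assumptions standing in for MultiTASC's actual scheduler:

import numpy as np

rng = np.random.default_rng(0)
n_devices, capacity = 8, 3.0            # server absorbs ~3 forwarded samples/tick (assumed)
rate = rng.uniform(2, 10, n_devices)    # heterogeneous per-device sample rates
threshold = np.full(n_devices, 0.5)     # forwarding-confidence thresholds

for _ in range(300):
    # A device forwards a sample when its light model's confidence falls below
    # the threshold, so expected forwarded load is rate * threshold (assuming
    # uniformly distributed confidences, for the sketch).
    load = rate * threshold
    # Tighten thresholds when the shared server saturates, relax them when
    # capacity is spare so that accuracy is recovered.
    if load.sum() > capacity:
        threshold *= 0.95
    else:
        threshold = np.minimum(threshold * 1.02, 1.0)

print("thresholds:", threshold.round(2),
      "total load:", round(float((rate * threshold).sum()), 2))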
Reinforcement Federated Learning Method Based on Adaptive OPTICS Clustering
Authors: Tianyu Zhao, Junping Du, Yingxia Shao, Zeli Guan
Abstract
Federated learning is a distributed machine learning technology which realizes a balance between data privacy protection and shared data computing. To protect data privacy, federated learning learns shared models by locally executing distributed training on participating devices and aggregating the local models into global models. A problem in federated learning is the negative impact caused by the non-independent and identically distributed data across different user terminals. In order to alleviate this problem, this paper proposes a reinforced federated aggregation method based on adaptive OPTICS clustering. Specifically, this method perceives the clustering environment as a Markov decision process and models the adjustment process of the parameter search direction, so as to find the best clustering parameters and thereby the best federated aggregation method. The core contribution of this paper is an adaptive OPTICS clustering algorithm for federated learning. The algorithm combines OPTICS clustering and adaptive learning technology, and can effectively deal with non-independent and identically distributed data across different user terminals. By perceiving the clustering environment as a Markov decision process, the goal is to find the best parameters of the OPTICS cluster without manual assistance, so as to obtain the best federated aggregation method and achieve better performance. The reliability and practicability of this method have been verified on experimental data, and its effectiveness and superiority have been demonstrated.
HOFA: Twitter Bot Detection with Homophily-Oriented Augmentation and Frequency Adaptive Attention
Authors: Sen Ye, Zhaoxuan Tan, Zhenyu Lei, Ruijie He, Hongrui Wang, Qinghua Zheng, Minnan Luo
Abstract
Twitter bot detection has become an increasingly important and challenging task to combat online misinformation, facilitate social content moderation, and safeguard the integrity of social platforms. Though existing graph-based Twitter bot detection methods have achieved state-of-the-art performance, they are all based on the homophily assumption, which assumes users with the same label are more likely to be connected, making it easy for Twitter bots to disguise themselves by following a large number of genuine users. To address this issue, we propose HOFA, a novel graph-based Twitter bot detection framework that combats the heterophilous disguise challenge with a homophily-oriented graph augmentation module (Homo-Aug) and a frequency adaptive attention module (FaAt). Specifically, Homo-Aug extracts user representations with an MLP, computes a k-NN graph, and improves the Twitter graph's homophily by injecting the k-NN edges. For FaAt, we propose an attention mechanism that adaptively serves as a low-pass filter along homophilic edges and a high-pass filter along heterophilic edges, preventing user features from being over-smoothed by their neighborhood. We also introduce a weight guidance loss to guide the frequency adaptive attention module. Our experiments demonstrate that HOFA achieves state-of-the-art performance on three widely acknowledged Twitter bot detection benchmarks, significantly outperforming vanilla graph-based bot detection techniques and strong heterophilic baselines. Furthermore, extensive studies confirm the effectiveness of our Homo-Aug and FaAt modules, and HOFA's ability to demystify the heterophilous disguise challenge.
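A minimal sketch of the k-NN injection step of Homo-Aug (random NumPy features stand in for the MLP user representations):

import numpy as np

rng = np.random.default_rng(0)
# Toy user features for 6 accounts; in HOFA these would be MLP representations.
feats = rng.normal(size=(6, 4))

def knn_edges(x, k=2):
    # Build a k-NN graph from pairwise distances; injecting these edges
    # connects users with similar representations (ideally the same label),
    # which raises the graph's homophily before message passing.
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return {(i, int(j)) for i in range(len(x)) for j in nbrs[i]}

print(sorted(knn_edges(feats)))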
Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model
Abstract
We consider the problem of recovering hidden communities in the Labeled Stochastic Block Model (LSBM) with a finite number of clusters, where cluster sizes grow linearly with the total number $n$ of items. In the LSBM, a label is (independently) observed for each pair of items. Our objective is to devise an efficient algorithm that recovers clusters using the observed labels. To this end, we revisit instance-specific lower bounds on the expected number of misclassified items satisfied by any clustering algorithm. We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability. IAC consists of a one-time spectral clustering algorithm followed by an iterative likelihood-based cluster assignment improvement. This approach is based on the instance-specific lower bound and does not require any model parameters, including the number of clusters. By performing the spectral clustering only once, IAC maintains an overall computational complexity of $\mathcal{O}(n \text{polylog}(n))$. We illustrate the effectiveness of our approach through numerical experiments.
Adaptive Bernstein Change Detector for High-Dimensional Data Streams
Authors: Marco Heyden, Edouard Fouché, Vadim Arzamasov, Tanja Fenn, Florian Kalinke, Klemens Böhm
Abstract
Change detection is of fundamental importance when analyzing data streams. Detecting changes both quickly and accurately enables monitoring and prediction systems to react, e.g., by issuing an alarm or by updating a learning algorithm. However, detecting changes is challenging when observations are high-dimensional. In high-dimensional data, change detectors should not only be able to identify when changes happen, but also in which subspace they occur. Ideally, one should also quantify how severe they are. Our approach, ABCD, has these properties. ABCD learns an encoder-decoder model and monitors its accuracy over a window of adaptive size. ABCD derives a change score based on Bernstein's inequality to detect deviations in terms of accuracy, which indicate changes. Our experiments demonstrate that ABCD outperforms its best competitor by at least 8% and up to 23% in F1-score on average. It can also accurately estimate changes' subspace, together with a severity measure that correlates with the ground truth.
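A compact sketch of Bernstein-bound change detection on a stream of bounded per-item errors, standing in for ABCD's encoder-decoder accuracy; the bound form and window policy here are simplified assumptions:

import numpy as np

def bernstein_bound(n, var, b, delta=0.05):
    # Empirical-Bernstein-style deviation bound for the mean of n samples
    # bounded by b with variance var; a larger deviation is unlikely
    # (probability < delta) under "no change".
    return np.sqrt(2 * var * np.log(3 / delta) / n) + 3 * b * np.log(3 / delta) / n

rng = np.random.default_rng(0)
# Stand-in for per-item reconstruction error of an encoder-decoder:
# a change at t=500 degrades reconstruction quality.
errors = np.concatenate([rng.beta(2, 8, 500), rng.beta(5, 5, 500)])

window = []
for t, e in enumerate(errors):
    window.append(e)
    n = len(window)
    if n < 60:
        continue
    cut = n // 2
    a, b_ = np.array(window[:cut]), np.array(window[cut:])
    gap = abs(a.mean() - b_.mean())
    eps = bernstein_bound(cut, a.var(), 1.0) + bernstein_bound(n - cut, b_.var(), 1.0)
    if gap > eps:                 # change score exceeded the bound
        print(f"change detected at t={t}")
        window = []               # adaptive window: restart after a detection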
Impacts and Risk of Generative AI Technology on Cyber Defense
Authors: Subash Neupane, Ivan A. Fernandez, Sudip Mittal, Shahram Rahimi
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Abstract
Generative Artificial Intelligence (GenAI) has emerged as a powerful technology capable of autonomously producing highly realistic content in various domains, such as text, images, audio, and videos. With its potential for positive applications in creative arts, content generation, virtual assistants, and data synthesis, GenAI has garnered significant attention and adoption. However, the increasing adoption of GenAI raises concerns about its potential misuse for crafting convincing phishing emails, generating disinformation through deepfake videos, and spreading misinformation via authentic-looking social media posts, posing a new set of challenges and risks in the realm of cybersecurity. To combat the threats posed by GenAI, we propose leveraging the Cyber Kill Chain (CKC) to understand the lifecycle of cyberattacks, as a foundational model for cyber defense. This paper aims to provide a comprehensive analysis of the risk areas introduced by the offensive use of GenAI techniques in each phase of the CKC framework. We also analyze the strategies employed by threat actors and examine their utilization throughout different phases of the CKC, highlighting the implications for cyber defense. Additionally, we propose GenAI-enabled defense strategies that are both attack-aware and adaptive. These strategies encompass various techniques such as detection, deception, and adversarial training, among others, aiming to effectively mitigate the risks posed by GenAI-induced cyber threats.
Analysing Mechanisms for Virtual Channel Management in Low-Diameter networks
Authors: Alejandro Cano, Cristóbal Camarero, Carmen Martínez, Ramón Beivide
Subjects: Hardware Architecture (cs.AR); Distributed, Parallel, and Cluster Computing (cs.DC); Networking and Internet Architecture (cs.NI)
Abstract
To interconnect their growing number of servers, current supercomputers and data centers are starting to adopt low-diameter networks, such as HyperX, Dragonfly and Dragonfly+. These emergent topologies require balancing the load over their links, and finding suitable non-minimal routing mechanisms for them becomes particularly challenging. The Valiant load balancing scheme is a very popular choice for non-minimal routing, and evolved adaptive routing mechanisms implemented in real systems are based on it. All these low-diameter networks are deadlock-prone when non-minimal routing is employed. Routing deadlocks occur when packets cannot progress due to cyclic dependencies. Therefore, developing efficient deadlock-free packet routing mechanisms is critical for the progress of these emergent networks. The routing function includes the routing algorithm for path selection and the buffer management policy that dictates how packets allocate the buffers of the switches on their paths. For the same routing algorithm, a different buffer management mechanism can lead to very different performance. Moreover, certain mechanisms considered efficient for avoiding deadlocks may still suffer from hard-to-pinpoint instabilities that make the network response erratic. This paper explores the impact of these buffer management policies on the performance of current interconnection networks, showing a performance drop of 90\% when an incorrect buffer management policy is used. Moreover, this study not only characterizes some of these undesirable scenarios but also proposes practicable solutions.
Keyword: quantization
An efficient and straightforward online quantization method for a data stream through remove-birth updating
Abstract
The growth of network-connected devices is creating an explosion of data, known as big data, and posing significant challenges to efficient data analysis. This data is generated continuously, creating a dynamic flow known as a data stream. The characteristics of a data stream may change dynamically, and this change is known as concept drift. Consequently, a method for handling data streams must efficiently reduce their volume while dynamically adapting to these changing characteristics. This paper proposes a simple online vector quantization method for concept drift. The proposed method identifies and replaces units with a low win probability through remove-birth updating, thus achieving rapid adaptation to concept drift. Furthermore, the results of this study show that the proposed method generates few dead units even in the presence of concept drift. The study also suggests that some metrics calculated from the proposed method will be helpful for drift detection.
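A minimal sketch of remove-birth updating for online vector quantization under concept drift; the thresholds and decay factors are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
k, lr = 8, 0.05
units = rng.normal(size=(k, 2))            # codebook units
wins = np.ones(k)                          # smoothed win counts

def stream():
    # Data stream with concept drift: the data centre jumps halfway through.
    for t in range(4000):
        c = np.array([0.0, 0.0]) if t < 2000 else np.array([5.0, 5.0])
        yield c + rng.normal(0, 0.5, 2)

for t, x in enumerate(stream()):
    j = np.argmin(np.linalg.norm(units - x, axis=1))   # winner-take-all
    units[j] += lr * (x - units[j])                     # online quantisation update
    wins *= 0.995
    wins[j] += 1.0
    # Remove-birth updating (sketch of the paper's idea): a unit whose win
    # probability has collapsed is removed and re-born at the current input,
    # letting the codebook track concept drift quickly with few dead units.
    if t % 100 == 0:
        worst = np.argmin(wins)
        if wins[worst] < 0.05 * wins.sum() / k:
            units[worst] = x + rng.normal(0, 0.1, 2)
            wins[worst] = wins.mean()

print("final units:\n", units.round(2))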
Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
Authors: Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Transformer models have been widely adopted in various domains over the last years, and especially large language models have advanced the field of AI significantly. Due to their size, the capability of these networks has increased tremendously, but this has come at the cost of a significant increase in necessary compute. Quantization is one of the most effective ways to reduce the computational time and memory consumption of neural networks. Many studies have shown, however, that modern transformer models tend to learn strong outliers in their activations, making them difficult to quantize. To retain acceptable performance, the existence of these outliers requires activations to be in higher bitwidth or the use of different numeric formats, extra fine-tuning, or other workarounds. We show that strong outliers are related to very specific behavior of attention heads that try to learn a "no-op" or just a partial update of the residual. To achieve the exact zeros needed in the attention matrix for a no-update, the input to the softmax is pushed to be larger and larger during training, causing outliers in other parts of the network. Based on these observations, we propose two simple (independent) modifications to the attention mechanism - clipped softmax and gated attention. We empirically show that models pre-trained using our methods learn significantly smaller outliers while maintaining and sometimes even improving the floating-point task performance. This enables us to quantize transformers to full INT8 quantization of the activations without any additional effort. We demonstrate the effectiveness of our methods on both language models (BERT, OPT) and vision transformers.
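A PyTorch sketch of the clipped-softmax idea: the softmax output is stretched and clipped back to [0, 1] so exact zeros are attainable without extreme logits; the zeta and gamma values here are illustrative:

import torch

def clipped_softmax(logits, zeta=1.003, gamma=-0.003, dim=-1):
    # Stretch the softmax output to [gamma, zeta] and clip back to [0, 1],
    # so attention weights can reach *exact* zeros (and ones) without the
    # softmax input having to diverge, which is what creates the outliers.
    return torch.clamp((zeta - gamma) * torch.softmax(logits, dim=dim) + gamma,
                       0.0, 1.0)

logits = torch.tensor([[4.0, -2.0, -3.0, -4.0]])
print(torch.softmax(logits, -1))      # small but non-zero tail probabilities
print(clipped_softmax(logits))        # tail snaps to exact zero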
Keyword: efficient
MPSTAN: Metapopulation-based Spatio-Temporal Attention Network for Epidemic Forecasting
Pushing the Limits of Machine Design: Automated CPU Design with AI
Deep Dynamic Epidemiological Modelling for COVID-19 Forecasting in Multi-level Districts
Efficient Deep Spiking Multi-Layer Perceptrons with Multiplication-Free Inference
Polynomial Logical Zonotopes: A Set Representation for Reachability Analysis of Logical Systems
Auto-BI: Automatically Build BI-Models Leveraging Local Join Prediction and Global Schema Graph
FFCV: Accelerating Training by Removing Data Bottlenecks
NTT-Based Polynomial Modular Multiplication for Homomorphic Encryption: A Tutorial
Cloud-Native Architectural Characteristics and their Impacts on Software Quality: A Validation Survey
An efficient and straightforward online quantization method for a data stream through remove-birth updating
Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy
A Hierarchical Approach to exploiting Multiple Datasets from TalkBank
Constant Memory Attention Block
Solving time-dependent PDEs with the ultraspherical spectral method
TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter
Probabilistic Slide-support Manipulation Planning in Clutter
High order entropy stable discontinuous Galerkin spectral element methods through subcell limiting
SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer
Constructing Colloquial Dataset for Persian Sentiment Analysis of Social Microblogs
Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
OptIForest: Optimal Isolation Forest for Anomaly Detection
Generative Multimodal Entity Linking
On Exploring Node-feature and Graph-structure Diversities for Node Drop Graph Pooling
Ladder Fine-tuning approach for SAM integrating complementary network
Exploring the Range of Possible Outcomes by means of Logical Scenario Analysis and Reduction for Testing Automated Driving Systems
Faster Compression of Deterministic Finite Automata
HypeRS: Building a Hypergraph-driven ensemble Recommender System
Solving the complete pseudo-impulsive radiation and diffraction problem using a spectral element method
Efficient Partitioning Method of Large-Scale Public Safety Spatio-Temporal Data based on Information Loss Constraints
Learning from Visual Observation via Offline Pretrained State-to-Go Transformer
Towards Exascale CFD Simulations Using the Discontinuous Galerkin Solver FLEXI
Map Point Selection for Visual SLAM
Multi-Objective Hull Form Optimization with CAD Engine-based Deep Learning Physics for 3D Flow Prediction
Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model
An Energy Stable Discontinuous Galerkin Time-Domain Finite Element Method in Optics and Photonics
Rate-Splitting Multiple Access for 6G Networks: Ten Promising Scenarios and Applications
Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping
Affine Correspondences between Multi-Camera Systems for Relative Pose Estimation
Sixth-Order Hybrid FDMs and/or the M-Matrix Property for Elliptic Interface Problems with Mixed Boundary Conditions
AugDMC: Data Augmentation Guided Deep Multiple Clustering
Transferable Curricula through Difficulty Conditioned Generators
Analysing Mechanisms for Virtual Channel Management in Low-Diameter networks
Deep Metric Learning with Soft Orthogonal Proxies
Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting
PromptIR: Prompting for All-in-One Blind Image Restoration
Keyword: faster
Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons
NoisyILRMA: Diffuse-Noise-Aware Independent Low-Rank Matrix Analysis for Fast Blind Source Extraction
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
Keyword: mobile
Slimmable Encoders for Flexible Split DNNs in Bandwidth and Resource Constrained IoT Systems
Accuracy evaluation of a Low-Cost Differential Global Positioning System for mobile robotics
A new 3-DOF 2T1R parallel mechanism: Topology design and kinematics
Minimalist and High-Quality Panoramic Imaging with PSF-aware Transformers
Investigating the Usability of Collaborative Robot control through Hands-Free Operation using Eye gaze and Augmented Reality
Keyword: pruning
Data-Free Backbone Fine-Tuning for Pruned Neural Networks
Keyword: diffusion
Semi-Implicit Denoising Diffusion Models (SIDDMs)
One at A Time: Multi-step Volumetric Probability Distribution Diffusion for Depth Estimation
DiffWA: Diffusion Models for Watermark Attack
On the Construction of Near-MDS Matrices
On the Direct Construction of MDS and Near-MDS Matrices
Towards More Realistic Membership Inference Attacks on Large Diffusion Models
Continuous Layout Editing of Single Images with Diffusion Models
Keyword: adaptive
MPSTAN: Metapopulation-based Spatio-Temporal Attention Network for Epidemic Forecasting
MultiTASC: A Multi-Tenancy-Aware Scheduler for Cascaded DNN Inference at the Consumer Edge
Reinforcement Federated Learning Method Based on Adaptive OPTICS Clustering
HOFA: Twitter Bot Detection with Homophily-Oriented Augmentation and Frequency Adaptive Attention
Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model
Adaptive Bernstein Change Detector for High-Dimensional Data Streams
Impacts and Risk of Generative AI Technology on Cyber Defense
Analysing Mechanisms for Virtual Channel Management in Low-Diameter networks
Keyword: quantization
An efficient and straightforward online quantization method for a data stream through remove-birth updating
Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing