Abstract
Advanced mobility concepts such as Urban Air Mobility are emerging rapidly. In this concept, a safe and efficient aviation transportation system will use highly automated aircraft to transport passengers or cargo at low altitudes within and between metropolitan regions. To accomplish these missions, new types of aircraft, sometimes known as air taxis, are being developed. Successfully integrating these aircraft into existing airspace is complicated and must take various aspects into account. One of these is the risk of wildlife strikes, which is predicted to be higher for air taxis. Their proposed operational cruising altitudes are lower and coincide with the altitudes where birds typically fly, resulting in a higher probability of collision. Additionally, air taxis are smaller and face lower certification requirements than conventional aircraft, so the severity of damaging bird strikes is higher. To assess the risk and formulate suitable regulations, an extensive analysis is required to provide more quantitative insight into the bird strike challenge. A theoretical bird strike model that quantifies the impact force as a function of different bird- and aircraft-related parameters was therefore developed previously. This paper aims to validate that theoretical model experimentally. It presents a methodology for implementing an experimental setup that allows the theoretical impact force model to be fully validated. A test matrix containing seven test cases, nine test scenarios and 135 iterations is formulated for the bird strike experiment, and the influencing parameters are considered for verifying the theoretical model. The paper closes with the experimental results, which indicate 92.89% conformance of the experimental results with the theoretical model.
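The paper's own force model is not reproduced in this digest. As a rough illustration of the kind of relationship such a model captures, the sketch below uses the classical steady-state soft-body estimate (often attributed to Wilbeck), where impact force scales with bird density, impact area, and the square of the closing velocity; the cylindrical bird idealization and all numbers are assumptions, not the paper's validated model.

```python
import math

# Rough steady-state soft-body impact force, F = rho * A * v^2, treating the
# bird as a fluid cylinder (an idealization, not the paper's validated model).
def bird_strike_force(mass_kg: float, velocity_ms: float,
                      density_kgm3: float = 950.0) -> float:
    volume = mass_kg / density_kgm3
    # Cylinder with length-to-diameter ratio 2, a common idealization.
    diameter = (2.0 * volume / math.pi) ** (1.0 / 3.0)
    area = math.pi * diameter ** 2 / 4.0           # impact cross-section
    return density_kgm3 * area * velocity_ms ** 2  # momentum-flux force

# Example: a 1 kg bird and an air taxi closing at 50 m/s.
print(f"{bird_strike_force(1.0, 50.0):,.0f} N")
```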
Estimating Treatment Effects Using Costly Simulation Samples from a Population-Scale Model of Opioid Use Disorder
Authors: Abdulrahman A. Ahmed, M. Amin Rahimian, Mark S. Roberts
Subjects: Multiagent Systems (cs.MA); Social and Information Networks (cs.SI); Applications (stat.AP)
Abstract
Large-scale models require substantial computational resources for analysis and for studying treatment conditions. In particular, estimating treatment effects from simulations may require infeasibly many runs at every treatment condition, so it is essential to develop efficient methods for allocating computational resources. Agent-based simulation allows us to generate highly realistic simulation samples. FRED (A Framework for Reconstructing Epidemiological Dynamics) is an agent-based modeling system with a geospatial perspective that uses a synthetic population constructed from U.S. census data. Given its synthetic population, FRED simulations provide a baseline for comparing results across treatment conditions. In this paper, we examine three methods for estimating treatment effects. In the first, we resort to brute-force allocation, where all treatment conditions receive an equal and relatively large number of simulation runs. In the second, we reduce the number of simulation runs by customizing the number of samples required for each treatment condition based on the width of the confidence interval around its mean estimate. In the third, we use a regression model, which allows us to learn across treatment conditions so that simulation samples allocated to one condition help estimate treatment effects in the others. We show that the regression-based method yields comparable estimates of treatment effects with fewer computational resources. The reduced variability and faster convergence of model-based estimates come at the cost of increased bias, and the bias-variance trade-off can be controlled by adjusting the number of model parameters (e.g., including higher-order interaction terms in the regression model).
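A minimal sketch of the second allocation strategy: keep drawing (costly) simulation samples for a condition until the 95% confidence interval around its mean estimate is narrower than a target width. The `simulate` callback is a hypothetical stand-in for one FRED run.

```python
import numpy as np

def estimate_until_precise(simulate, target_width=0.05, batch=10, max_runs=1000):
    """Draw samples in batches until the 95% CI is narrow enough."""
    samples = []
    while len(samples) < max_runs:
        samples.extend(simulate() for _ in range(batch))
        x = np.asarray(samples)
        half_width = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
        if 2 * half_width <= target_width:
            break
    return np.mean(samples), len(samples)

rng = np.random.default_rng(0)
mean, n_runs = estimate_until_precise(lambda: rng.normal(0.3, 0.1))
print(f"estimate={mean:.3f} after {n_runs} runs")
```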
Bayesian low-rank adaptation for large language models
Authors: Adam X. Yang, Maxime Robeyns, Xi Wang, Laurence Aitchison
Abstract
Parameter-efficient fine-tuning (PEFT) has emerged as a new paradigm for cost-efficient fine-tuning of large language models (LLMs), with low-rank adaptation (LoRA) being a widely adopted choice. However, fine-tuned LLMs often become overconfident, especially when fine-tuned on smaller datasets. Bayesian methods, with their inherent ability to estimate uncertainty, serve as potent tools to mitigate overconfidence and enhance calibration. In this work, we introduce Laplace-LoRA, a straightforward yet effective Bayesian method that applies the Laplace approximation to the LoRA parameters and considerably boosts the calibration of fine-tuned LLMs.
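As a toy illustration of the post-hoc Laplace idea (not the paper's exact Hessian treatment or LoRA integration), the sketch below fits a small classifier standing in for the LoRA parameters, builds a diagonal Fisher approximation at the MAP estimate, and averages predictions over posterior weight samples; all sizes and names are assumptions.

```python
import torch

torch.manual_seed(0)
X, y = torch.randn(256, 8), torch.randint(0, 2, (256,))
adapter = torch.nn.Linear(8, 2)          # hypothetical stand-in for LoRA params

# 1) Fit the MAP estimate as usual.
opt = torch.optim.Adam(adapter.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    torch.nn.functional.cross_entropy(adapter(X), y).backward()
    opt.step()

# 2) Diagonal Fisher at the MAP: mean squared per-example gradients.
fisher = [torch.zeros_like(p) for p in adapter.parameters()]
for i in range(len(X)):
    adapter.zero_grad()
    torch.nn.functional.cross_entropy(adapter(X[i:i+1]), y[i:i+1]).backward()
    for f, p in zip(fisher, adapter.parameters()):
        f += p.grad ** 2 / len(X)

# 3) Posterior precision ~ N * Fisher + prior precision; average predictions
#    over weight samples for better-calibrated probabilities.
map_state = [p.detach().clone() for p in adapter.parameters()]
probs = torch.zeros(len(X), 2)
with torch.no_grad():
    for _ in range(30):
        for f, p, m in zip(fisher, adapter.parameters(), map_state):
            std = 1.0 / torch.sqrt(len(X) * f + 1.0)   # prior precision 1.0
            p.copy_(m + std * torch.randn_like(m))
        probs += torch.softmax(adapter(X), dim=-1) / 30
print(probs[:3])
```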
Business Metric-Aware Forecasting for Inventory Management
Abstract
Time-series forecasts play a critical role in business planning. However, forecasters typically optimize objectives that are agnostic to downstream business goals and thus can produce forecasts misaligned with business preferences. In this work, we demonstrate that optimization of conventional forecasting metrics can often lead to sub-optimal downstream business performance. Focusing on the inventory management setting, we derive an efficient procedure for computing and optimizing proxies of common downstream business metrics in an end-to-end differentiable manner. We explore a wide range of plausible cost trade-off scenarios, and empirically demonstrate that end-to-end optimization often outperforms optimization of standard business-agnostic forecasting metrics (by up to 45.7% for a simple scaling model, and up to 54.0% for an LSTM encoder-decoder model). Finally, we discuss how our findings could benefit other business contexts.
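As a sketch of what "end-to-end differentiable" can mean here, the toy below trains a forecaster directly against a holding/stockout cost proxy; the cost proxy, the order-up-to-forecast policy, and the scaling model are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def inventory_cost(forecast, demand, holding=1.0, stockout=5.0):
    """Differentiable proxy: order up to the forecast, pay for the mismatch."""
    leftover = torch.relu(forecast - demand)     # units held over
    shortage = torch.relu(demand - forecast)     # unmet demand
    return (holding * leftover + stockout * shortage).mean()

torch.manual_seed(0)
history = torch.rand(512, 8)                      # past 8 periods of demand
demand = history.mean(dim=1) + 0.1 * torch.randn(512)

scale = torch.ones(1, requires_grad=True)         # a "simple scaling model"
opt = torch.optim.SGD([scale], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    inventory_cost(scale * history.mean(dim=1), demand).backward()
    opt.step()
# With stockouts 5x costlier than holding, the learned scale biases forecasts
# upward relative to the squared-error optimum.
print(scale.item())
```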
DebtViz: A Tool for Identifying, Measuring, Visualizing, and Monitoring Self-Admitted Technical Debt
Authors: Yikun Li, Mohamed Soliman, Paris Avgeriou, Maarten van Ittersum
Abstract
Technical debt, specifically Self-Admitted Technical Debt (SATD), remains a significant challenge for software developers and managers due to its potential to adversely affect long-term software maintainability. Although various approaches exist to identify SATD, tools for its comprehensive management are notably lacking. This paper presents DebtViz, an innovative SATD tool designed to automatically detect, classify, visualize and monitor various types of SATD in source code comments and issue tracking systems. DebtViz employs a Convolutional Neural Network-based approach for detection and a deconvolution technique for keyword extraction. The tool is structured into a back-end service for data collection and pre-processing, a SATD classifier for data categorization, and a front-end module for user interaction. DebtViz not only makes the management of SATD more efficient but also provides in-depth insights into the state of SATD within software systems, fostering informed decision-making on managing it. The scalability and deployability of DebtViz also make it a practical tool for both developers and managers in diverse software development environments. The source code of DebtViz is available at https://github.com/yikun-li/visdom-satd-management-system and the demo of DebtViz is at https://youtu.be/QXH6Bj0HQew.
Accelerating Continuous Integration with Parallel Batch Testing
Authors: Emad Fallahzadeh (1), Amir Hossein Bavand (1), Peter C. Rigby (1) ((1) Concordia University, Montreal, Quebec, Canada)
Abstract
Continuous integration at scale is costly but essential to software development. Various test optimization techniques, including test selection and prioritization, aim to reduce this cost. Test batching is an effective but overlooked alternative. This study evaluates the effect of parallelization by adjusting the machine count for test batching and introduces two novel approaches. We establish TestAll as a baseline to study the impact of parallelism and machine count on feedback time. We re-evaluate ConstantBatching and introduce DynamicBatching, which adapts the batch size based on the remaining changes in the queue. We also propose TestCaseBatching, which enables new builds to join a batch before full test execution, thus speeding up continuous integration. Our evaluations use results from Ericsson and 276 million test outcomes from open-source Chrome, assessing feedback time and execution reduction, and we provide access to the Chrome project scripts and data. The results reveal a non-linear impact of test parallelization on feedback time, as each test delay compounds across the entire test queue. ConstantBatching, with a batch size of 4, uses up to 72% fewer machines to maintain the actual average feedback time and provides a constant execution reduction of up to 75%. Similarly, DynamicBatching maintains the actual average feedback time with up to 91% fewer machines and exhibits a variable execution reduction of up to 99%. TestCaseBatching maintains the actual average feedback time with up to 81% fewer machines and demonstrates a variable execution reduction of up to 67%. We recommend that practitioners use DynamicBatching and TestCaseBatching to reduce the required testing machines efficiently. Analyzing historical data to find the threshold where adding more machines has minimal impact on feedback time is also crucial for resource-effective testing.
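A minimal sketch of the DynamicBatching idea: each free machine slot drains whatever is currently waiting, so the batch size tracks the queue length. The `run_tests` callback is a hypothetical stand-in for a real CI system, and the bisection-on-failure step is one common culprit-finding strategy; the paper's exact policy may differ.

```python
from collections import deque

def test_batch(batch, run_tests):
    """Test a batch together; on failure, bisect to isolate culprit changes."""
    if run_tests(batch):
        return []
    if len(batch) == 1:
        return batch                                    # culprit found
    mid = len(batch) // 2
    return (test_batch(batch[:mid], run_tests)
            + test_batch(batch[mid:], run_tests))

def dynamic_batching(queue, run_tests, max_batch=64):
    """Batch size adapts to the queue: each slot drains what is waiting."""
    culprits = []
    while queue:
        take = min(len(queue), max_batch)
        batch = [queue.popleft() for _ in range(take)]
        culprits += test_batch(batch, run_tests)
    return culprits

changes = deque(f"change-{i}" for i in range(10))
print(dynamic_batching(changes, run_tests=lambda b: "change-7" not in b))
```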
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
Abstract
Large language models (LLMs) have revolutionized natural language processing tasks. However, their practical deployment is hindered by their immense memory and computation requirements. Although recent post-training quantization (PTQ) methods are effective in reducing the memory footprint and improving the computational efficiency of LLMs, they hand-craft quantization parameters, which leads to low performance and fails in extremely low-bit quantization. To tackle this issue, we introduce an Omnidirectionally calibrated Quantization (OmniQuant) technique for LLMs, which achieves good performance in diverse quantization settings while maintaining the computational efficiency of PTQ by efficiently optimizing various quantization parameters. OmniQuant comprises two innovative components: Learnable Weight Clipping (LWC) and Learnable Equivalent Transformation (LET). LWC modulates the extreme values of weights by optimizing the clipping threshold. Meanwhile, LET tackles activation outliers by shifting the challenge of quantization from activations to weights through a learnable equivalent transformation. Operating within a differentiable framework using block-wise error minimization, OmniQuant can optimize the quantization process efficiently for both weight-only and weight-activation quantization. For instance, the LLaMA-2 model family, with sizes from 7B to 70B, can be processed with OmniQuant on a single A100-40G GPU within 1-16 hours using 128 samples. Extensive experiments validate OmniQuant's superior performance across diverse quantization configurations such as W4A4, W6A6, W4A16, W3A16, and W2A16. Additionally, OmniQuant demonstrates effectiveness in instruction-tuned models and delivers notable improvements in inference speed and memory reduction on real devices. Code and models are available at https://github.com/OpenGVLab/OmniQuant.
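A toy sketch of the LWC ingredient only: a sigmoid-parameterized clipping threshold learned by block-wise error minimization, with a straight-through estimator for the rounding step. The exact parameterization and the LET component are in the paper; everything below is an illustrative simplification.

```python
import torch

def quantize_weight(w, gamma, n_bits=4):
    """Fake-quantize w with a learnable clipping factor sigmoid(gamma)."""
    clip = torch.sigmoid(gamma) * w.abs().max()      # learnable threshold
    qmax = 2 ** (n_bits - 1) - 1
    scale = clip / qmax
    q = w / scale
    q = q + (torch.round(q) - q).detach()            # STE through rounding
    q = torch.clamp(q, -qmax - 1, qmax)
    return q * scale

torch.manual_seed(0)
w = torch.randn(256, 256)                            # one block's weights
gamma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([gamma], lr=0.05)
x = torch.randn(64, 256)                             # calibration activations
for _ in range(100):                                 # block-wise error min.
    opt.zero_grad()
    loss = ((x @ quantize_weight(w, gamma).T - x @ w.T) ** 2).mean()
    loss.backward()
    opt.step()
print(torch.sigmoid(gamma).item())                   # learned clip fraction
```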
MatchXML: An Efficient Text-label Matching Framework for Extreme Multi-label Text Classification
Authors: Hui Ye, Rajshekhar Sunderraman, Shihao Ji
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Abstract
The eXtreme Multi-label text Classification (XMC) problem refers to training a classifier that tags a text sample with relevant labels from an extremely large-scale label set (e.g., millions of labels). We propose MatchXML, an efficient text-label matching framework for XMC. We observe that label embeddings generated from sparse Term Frequency-Inverse Document Frequency (TF-IDF) features have several limitations. We thus propose label2vec to train semantically dense label embeddings with the Skip-gram model. The dense label embeddings are then used to build a Hierarchical Label Tree by clustering. In fine-tuning the pre-trained encoder Transformer, we formulate multi-label text classification as a text-label matching problem in a bipartite graph. We then extract dense text representations from the fine-tuned Transformer. Besides the fine-tuned dense text embeddings, we also extract static dense sentence embeddings from a pre-trained Sentence Transformer. Finally, a linear ranker is trained on the sparse TF-IDF features, the fine-tuned dense text representations, and the static dense sentence features. Experimental results demonstrate that MatchXML achieves state-of-the-art accuracy on five out of six datasets. As for speed, MatchXML outperforms the competing methods on all six datasets. Our source code is publicly available at https://github.com/huiyegit/MatchXML.
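A minimal sketch of the label2vec idea: treat the set of labels assigned to each training sample as a "sentence" and train a Skip-gram model over label co-occurrence, yielding dense label embeddings suitable for clustering into a Hierarchical Label Tree. The corpus below is a toy assumption.

```python
from gensim.models import Word2Vec

label_sets = [                      # each sample's labels act as one sentence
    ["machine_learning", "classification", "xmc"],
    ["machine_learning", "nlp", "transformers"],
    ["nlp", "transformers", "xmc"],
]
model = Word2Vec(label_sets, vector_size=32, window=5, sg=1,  # sg=1: Skip-gram
                 min_count=1, epochs=50, seed=0)
print(model.wv.most_similar("xmc", topn=2))   # nearest labels in embedding space
```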
The time dimensional reduction method to determine the initial conditions without the knowledge of damping coefficients
Authors: Thuy T. Le, Linh V. Nguyen, Loc H. Nguyen, Hyunha Park
Abstract
This paper aims to reconstruct the initial condition of a hyperbolic equation with an unknown damping coefficient. Our approach approximates the solution of the hyperbolic equation by its truncated Fourier expansion in the time domain with respect to a polynomial-exponential basis. This truncation eliminates the time variable, yielding a system of quasi-linear elliptic equations. To solve the system globally, without needing an accurate initial guess, we employ the Carleman contraction principle. We provide several numerical examples to illustrate the efficacy of our method. The method not only delivers precise solutions but also showcases remarkable computational efficiency.
DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT
Abstract
Federated learning (FL) aims to collaboratively train a global model while ensuring client data privacy. However, FL faces challenges from the non-IID data distribution among clients. Clustered FL (CFL) has emerged as a promising solution, but most existing CFL frameworks are synchronous. An asynchronous CFL framework called SDAGFL, based on directed acyclic graph distributed ledger techniques (DAG-DLT), was proposed, but its complete decentralization leads to high communication and storage costs. We propose DAG-ACFL, an asynchronous clustered FL framework likewise based on DAG-DLT. We first detail the components of DAG-ACFL. A tip selection algorithm based on the cosine similarity of model parameters is then designed to aggregate models from clients with similar distributions, and an adaptive variant leveraging change-point detection dynamically determines the number of selected tips. We evaluate the clustering and training performance of DAG-ACFL on multiple datasets and analyze its communication and storage costs. Experiments show the superiority of DAG-ACFL in asynchronous clustered FL. By combining DAG-DLT with clustered FL, DAG-ACFL realizes robust, decentralized and private model training with efficient performance.
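A minimal sketch of cosine-similarity tip selection: a client scores the current DAG tips by how similar their model updates are to its own local update and keeps the most similar ones, so models from clients with similar data distributions get aggregated together. The tips below are toy vectors, and the fixed k stands in for the paper's adaptive change-point variant.

```python
import numpy as np

def select_tips(local_update, tip_updates, k=2):
    """Return indices and scores of the k tips most aligned with the client."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(local_update, t) for t in tip_updates]
    order = np.argsort(scores)[::-1]            # most similar tips first
    return list(order[:k]), [scores[i] for i in order[:k]]

rng = np.random.default_rng(0)
local = rng.normal(size=10)
tips = [local + 0.1 * rng.normal(size=10),      # same cluster
        rng.normal(size=10),                    # different cluster
        local + 0.2 * rng.normal(size=10)]      # same cluster
print(select_tips(local, tips))
```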
Discovering Dichotomies for Problems in Database Theory
Abstract
Dichotomy theorems, which characterize the conditions under which a problem can be solved efficiently, have helped identify important tractability borders for problems such as probabilistic query evaluation, view maintenance, and query containment (among many others). However, dichotomy theorems for many such problems remain elusive in key settings such as bag semantics or queries with self-joins. This work aims to unearth dichotomies for fundamental problems in reverse data management and knowledge representation. We use a novel approach to discovering dichotomies: instead of creating dedicated algorithms for easy (PTIME) and hard (NP-complete) cases, we devise unified algorithms that are guaranteed to terminate in PTIME for the easy cases. Using this approach, we discovered new tractable cases for the problem of minimal factorization of provenance formulas, as well as dichotomies under bag semantics for the problems of resilience and causal responsibility.
Using Adamic-Adar Index Algorithm to Predict Volunteer Collaboration: Less is More
Authors: Chao Wu, Peng Chen, Baiqiao Yin, Zijuan Lin, Chen Jiang, Di Yu, Changhong Zou, Chunwang Lui
Subjects: Social and Information Networks (cs.SI); Machine Learning (cs.LG)
Abstract
Social networks exhibit a complex graph-like structure due to the uncertainty surrounding potential collaborations among participants. Machine learning algorithms generally perform well across many real-world prediction tasks; however, whether they outperform algorithms designed specifically for graph link prediction has remained unknown. To address this question, we applied the Adamic-Adar Index (AAI), Jaccard Coefficient (JC) and common neighbour centrality (CNC) as representatives of graph-specific algorithms to predict potential collaborations, using data from volunteer activities during the Covid-19 pandemic in Shenzhen, alongside classical machine learning algorithms such as random forest, support vector machine, and gradient boosting, both as single predictors and as components of ensemble learning. This paper shows that the AAI algorithm outperformed the traditional JC and CNC, as well as the machine learning algorithms, in analyzing graph node attributes for this task.
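For reference, AAI is a standard link-prediction score available off the shelf in NetworkX; the sketch below runs it on a toy collaboration graph (the Shenzhen volunteer data is not reproduced here). AAI weights common neighbours by 1/log(degree), so rare shared contacts count more than highly connected hubs.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("ana", "bo"), ("ana", "cy"), ("bo", "cy"),
                  ("cy", "dee"), ("dee", "ed"), ("bo", "dee")])

# Score every non-edge; higher scores = more likely future collaborations.
scores = sorted(nx.adamic_adar_index(G),
                key=lambda triple: triple[2], reverse=True)
for u, v, s in scores[:3]:
    print(f"{u} -- {v}: {s:.3f}")
```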
Falcon: Accelerating Homomorphically Encrypted Convolutions for Efficient Private Mobile Network Inference
Abstract
Efficient networks, e.g., MobileNetV2 and EfficientNet, achieve state-of-the-art (SOTA) accuracy with lightweight computation. However, existing homomorphic encryption (HE)-based two-party computation (2PC) frameworks are not optimized for these networks and suffer from high inference overhead. We observe that the inefficiency mainly comes from the packing algorithm, which ignores the computation characteristics and the communication bottleneck of homomorphically encrypted depthwise convolutions. Therefore, in this paper, we propose Falcon, an effective dense packing algorithm for HE-based 2PC frameworks. Falcon features a zero-aware greedy packing algorithm and a communication-aware operator tiling strategy to improve the packing density of depthwise convolutions. Compared to SOTA HE-based 2PC frameworks, e.g., CrypTFlow2, Iron and Cheetah, Falcon achieves more than 15.6x, 5.1x and 1.8x latency reduction, respectively, at the operator level. Meanwhile, at the network level, Falcon allows for 1.4% and 4.2% accuracy improvement over Cheetah on the CIFAR-100 and TinyImagenet datasets with iso-communication, respectively.
Self-supervised learning for hotspot detection and isolation from thermal images
Authors: Shreyas Goyal, Jagath C. Rajapakse
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Hotspot detection using thermal imaging has recently become essential in several industrial applications, such as security, health, and equipment monitoring. It is of utmost importance in industrial safety, where equipment can develop anomalies and hotspots are their early indicators. We address the problem of hotspot detection in thermal images by proposing a self-supervised learning approach. Self-supervised learning has shown potential as a competitive alternative to supervised learning, but its application to thermography has been limited by the lack of diverse data, domain-specific pre-trained models, standardized benchmarks, etc. We propose a self-supervised representation learning approach followed by fine-tuning that improves the detection of hotspots via classification. A SimSiam-network-based ensemble classifier decides whether an image contains hotspots, and detection is followed by precise hotspot isolation. In this way, we provide highly accurate and precise hotspot identification applicable to a wide range of applications. To address the paucity of easily accessible thermal images, we created a novel large thermal image dataset. Experiments with our dataset and a publicly available segmentation dataset show the potential of our approach for hotspot detection and its ability to isolate hotspots with high accuracy. We achieve a Dice coefficient of 0.736, the highest compared with existing hotspot identification techniques. Our experiments also show self-supervised learning to be a strong contender against supervised learning, providing competitive metrics for hotspot detection, with our approach reaching an accuracy of 97%.
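A minimal sketch of the SimSiam objective that underlies the pre-training stage: two augmented views of the same thermal crop are pulled together by negative cosine similarity, with a stop-gradient on one branch to prevent collapse. The encoder/predictor sizes and the augmentations here are toy assumptions, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 64))
predictor = torch.nn.Linear(64, 64)

def simsiam_loss(view1, view2):
    z1, z2 = encoder(view1), encoder(view2)         # representations
    p1, p2 = predictor(z1), predictor(z2)           # predictions
    def d(p, z):                                    # neg. cosine similarity
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)        # symmetric, stop-grad on z

x = torch.rand(16, 1, 32, 32)                       # a batch of thermal crops
aug1 = x + 0.05 * torch.randn_like(x)               # stand-in augmentations
aug2 = torch.flip(x, dims=[-1])
print(simsiam_loss(aug1, aug2).item())
```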
Design and Control of a Bio-inspired Wheeled Bipedal Robot
Abstract
Wheeled bipedal robots can execute agile and versatile locomotion tasks in unknown terrains, with balancing being a key criterion in evaluating their dynamic performance. This paper focuses on enhancing the balancing performance of wheeled bipedal robots through innovations in both hardware and software. A bio-inspired mechanical design, modeled on the human barbell squat, is proposed and implemented to distribute load efficiently onto the limb joints. This design improves knee joint torque efficiency and facilitates control over the distribution of the center of mass (CoM). Meanwhile, a customized balance model, the wheeled linear inverted pendulum (wLIP), is developed. The wLIP surpasses other alternatives by providing a more accurate estimation of wheeled robot dynamics while ensuring balancing stability. Experimental results demonstrate that the robot is capable of maintaining balance while manipulating pelvis states and CoM velocity; furthermore, it exhibits robustness against external disturbances and unknown terrains.
LLM2KB: Constructing Knowledge Bases using instruction tuned context aware Large Language Models
Abstract
The advent of Large Language Models (LLMs) has revolutionized the field of natural language processing, enabling significant progress in various applications. One key area of interest is the construction of Knowledge Bases (KBs) using these powerful models. Knowledge bases serve as repositories of structured information, facilitating information retrieval and inference tasks. Our paper proposes LLM2KB, a system for constructing knowledge bases using large language models, with a focus on the Llama 2 architecture and the Wikipedia dataset. We perform parameter-efficient instruction tuning of Llama-2-13b-chat and StableBeluga-13B by training small injection models that have only 0.05% of the parameters of the base models, using the Low Rank Adaptation (LoRA) technique. These injection models are trained with prompts engineered to use Wikipedia page contexts of subject entities, fetched with a Dense Passage Retrieval (DPR) algorithm, to answer the relevant object entities for a given subject entity and relation. Our best performing model achieved an average F1 score of 0.6185 across 21 relations in the LM-KBC challenge held at the ISWC 2023 conference.
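A minimal sketch of this kind of LoRA setup using the Hugging Face peft library; the rank, target modules, and other hyperparameters below are illustrative assumptions, not the paper's reported configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()          # a fraction of a percent trainable
```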
Predictive Network Configuration with Hierarchical Spectral Clustering for Software Defined Vehicles
Authors: Pierre Laclau, Stéphane Bonnet (Heudiasyc), Bertrand Ducourthial (Heudiasyc), Xiaoting Li, Trista Lin
Subjects: Networking and Internet Architecture (cs.NI)
Abstract
The increasing connectivity and autonomy of vehicles have led to a growing need for dynamic, real-time adjustments to software and network configurations. Software Defined Vehicles (SDV) have emerged as a potential solution, adapting to changing user needs with continuous updates and onboard reconfigurations to offer infotainment, connected, and background services such as cooperative driving. However, network configuration management in SDVs remains a significant challenge, particularly in the context of shared Ethernet-based in-vehicle networks. Traditional worst-case static configuration methods cannot efficiently allocate network resources while ensuring Quality of Service (QoS) guarantees for each network flow within the capabilities of the physical topology. In this work, we propose a configuration generation methodology that addresses these limitations by dynamically switching between pre-computed offboard configurations downloaded to the vehicle. Simulation results are presented and future work is discussed.
Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness
Authors: Sruthi Gorantla, Eshaan Bhansali, Amit Deshpande, Anand Louis
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY); Information Retrieval (cs.IR)
Abstract
In learning-to-rank (LTR), optimizing only the relevance (or the expected ranking utility) can cause representational harm to certain categories of items. Moreover, if there is implicit bias in the relevance scores, LTR models may fail to optimize for true relevance. Previous works have proposed efficient algorithms to train stochastic ranking models that achieve fairness of exposure to the groups ex-ante (i.e., in expectation), which may not guarantee representation fairness to the groups ex-post, that is, after realizing a ranking from the stochastic ranking model. Typically, ex-post fairness is achieved by post-processing, but previous work does not train stochastic ranking models that are aware of this post-processing. In this paper, we propose a novel objective that maximizes expected relevance only over those rankings that satisfy given representation constraints, thereby ensuring ex-post fairness. Building upon recent work on an efficient sampler for ex-post group-fair rankings, we propose a group-fair Plackett-Luce model and show that it can be efficiently optimized for our objective in the LTR framework. Experiments on three real-world datasets show that our group-fair algorithm guarantees fairness while usually achieving better relevance than the LTR baselines. Our algorithm also achieves better relevance than post-processing baselines, which likewise ensure ex-post fairness. Further, when implicit bias is injected into the training data, our algorithm typically outperforms existing LTR baselines in relevance.
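For background, sampling a ranking from a (vanilla, unconstrained) Plackett-Luce model can be done with the Gumbel-argsort trick: adding Gumbel noise to item scores and sorting is equivalent to sequential sampling without replacement proportional to exp(score). The paper's group-fairness constraints are not enforced in this toy sketch.

```python
import numpy as np

def sample_plackett_luce(scores, rng):
    gumbel = rng.gumbel(size=len(scores))
    return np.argsort(-(scores + gumbel))      # ranking, best position first

rng = np.random.default_rng(0)
scores = np.array([2.0, 1.0, 0.5, 0.0])        # log-relevance of 4 items
rankings = [tuple(sample_plackett_luce(scores, rng)) for _ in range(5)]
print(rankings)                                 # stochastic rankings
```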
Significant-attributed Community Search in Heterogeneous Information Networks
Abstract
Community search is a personalized community discovery problem aimed at finding densely-connected subgraphs containing the query vertex. In particular, the search for communities with high-importance vertices has recently received a great deal of attention. However, existing works mainly focus on conventional homogeneous networks where vertices are of the same type, and are not applicable to heterogeneous information networks (HINs) composed of multi-typed vertices and different semantic relations, such as bibliographic networks. In this paper, we study the problem of high-importance community search in HINs. A novel community model is introduced, named heterogeneous significant community (HSC), to unravel closely connected vertices of the same type with high attribute values through multiple semantic relationships. An HSC not only maximizes the exploration of indirect relationships across entities of the anchor-type but also incorporates their significance. To search HSCs, we first develop online algorithms that exploit both segment-based meta-path expansion and significance increment. Specifically, a solution space reuse strategy based on structural nesting is designed to boost efficiency. In addition, we devise a two-level index to support searching HSCs in optimal time, based on which a space-efficient compact index is proposed. Extensive experiments on real-world large-scale HINs demonstrate that our solutions are effective and efficient for searching HSCs, and that the index-based algorithms are 2-4 orders of magnitude faster than the online algorithms.
Model-free Reinforcement Learning with Stochastic Reward Stabilization for Recommender Systems
Abstract
Model-free RL-based recommender systems have recently received increasing research attention for their ability to handle partial feedback and long-term rewards. However, most existing research ignores a critical feature of recommender systems: one user's feedback on the same item at different times is random. This stochastic-rewards property differs essentially from classic RL scenarios with deterministic rewards, making RL-based recommender systems much more challenging. In this paper, we first demonstrate in a simulator environment that using direct stochastic feedback results in a significant drop in performance. To handle the stochastic feedback more efficiently, we then design two stochastic reward stabilization frameworks that replace the direct stochastic feedback with feedback learned by a supervised model. Both frameworks are model-agnostic, i.e., they can effectively utilize various supervised models. We demonstrate the superiority of the proposed frameworks over different RL-based recommendation baselines with extensive experiments on a recommendation simulator as well as an industrial-level recommender system.
A Bayesian Active Learning Approach to Comparative Judgement
Authors: Andy Gray, Alma Rahat, Tom Crick, Stephen Lindsay
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY); Information Retrieval (cs.IR)
Abstract
Assessment is a crucial part of education. Traditional marking is a source of inconsistencies and unconscious bias, placing a high cognitive load on the assessors. An approach to address these issues is comparative judgement (CJ). In CJ, the assessor is presented with a pair of items and is asked to select the better one. Following a series of comparisons, a rank is derived using a ranking model such as the Bradley-Terry model (BTM). While CJ is considered a reliable method for marking, there are concerns around transparency, and the ideal number of pairwise comparisons needed for a reliable estimate of the rank order is not known. Additionally, there have been attempts to select the pairs to compare next in an informative manner, but some existing methods are known to create their own bias within the results, inflating the reliability metric used. As a result, a random selection approach is usually deployed. We propose a novel Bayesian approach to CJ (BCJ) for determining the ranks of compared items, together with a new way to select the pairs to present to the marker(s) using active learning (AL), addressing the key shortcomings of traditional CJ. Furthermore, we demonstrate how the entire approach can provide transparency by giving the user insight into how it makes its decisions, while also being more efficient. Results from our experiments confirm that the proposed BCJ combined with the entropy-driven AL pair-selection method is superior to the alternatives. We also find that the more comparisons are done, the more accurate BCJ becomes, which resolves the deterioration that the current method suffers when too many comparisons are performed. As our approach can generate the complete predicted rank distribution for an item, we also show how this can be used to devise a predicted grade, guided by the assessor.
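A minimal sketch of entropy-driven pair selection: under a Bradley-Terry model with current ability estimates, pick the pair whose outcome is most uncertain (win probability closest to 1/2, i.e. maximum outcome entropy). The Bayesian machinery of BCJ is simplified away here; point estimates stand in for posterior beliefs.

```python
import itertools
import math

def outcome_entropy(a, b):
    p = 1.0 / (1.0 + math.exp(-(a - b)))       # Bradley-Terry win probability
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def next_pair(abilities):
    pairs = itertools.combinations(range(len(abilities)), 2)
    return max(pairs, key=lambda ij: outcome_entropy(abilities[ij[0]],
                                                     abilities[ij[1]]))

abilities = [0.0, 1.5, 1.6, -0.7]               # current estimates per item
print(next_pair(abilities))                     # -> (1, 2): the closest contest
```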
Learning Compact Neural Networks with Deep Overparameterised Multitask Learning
Abstract
Compact neural networks offer many benefits for real-world applications. However, it is usually challenging to train a compact neural network with a small parameter size and low computational cost to achieve the same or better model performance than more complex and powerful architectures. This is particularly true for multitask learning, where different tasks compete for resources. We present a simple, efficient and effective multitask learning design that overparameterises the model architecture during training and shares the overparameterised model parameters more effectively across tasks, for better optimisation and generalisation. Experiments on two challenging multitask datasets (NYUv2 and COCO) demonstrate the effectiveness of the proposed method across various convolutional networks and parameter sizes.
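A generic sketch of training-time overparameterisation (not the paper's exact design): a compact linear layer is factorised into a wider product during training, then collapsed back into a single matrix for deployment, so the deployed network stays compact.

```python
import torch

class OverparamLinear(torch.nn.Module):
    def __init__(self, d_in, d_out, d_hidden=256):
        super().__init__()
        self.U = torch.nn.Parameter(torch.randn(d_out, d_hidden) * 0.05)
        self.V = torch.nn.Parameter(torch.randn(d_hidden, d_in) * 0.05)
        self.bias = torch.nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return x @ (self.U @ self.V).T + self.bias

    def collapse(self):
        """Fold the factors into a plain compact Linear for deployment."""
        layer = torch.nn.Linear(self.V.shape[1], self.U.shape[0])
        with torch.no_grad():
            layer.weight.copy_(self.U @ self.V)
            layer.bias.copy_(self.bias)
        return layer

layer = OverparamLinear(16, 8)
x = torch.randn(4, 16)
assert torch.allclose(layer(x), layer.collapse()(x), atol=1e-5)
```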
A Study on Hyperparameters Configurations for an Efficient Human Activity Recognition System
Authors: Paulo J.S. Ferreira, João Mendes Moreira, João M.P. Cardoso
Abstract
Human Activity Recognition (HAR) has been a popular research field due to the widespread availability of devices with sensors and computational power (e.g., smartphones and smartwatches). Applications of HAR systems have been extensively researched in recent literature, mainly due to the benefits of improving quality of life in areas like health and fitness monitoring. However, since people have different motion patterns when performing physical activities, a HAR system must adapt to user characteristics to maintain or improve accuracy. Mobile devices such as smartphones, often used to implement HAR systems, have limited resources (e.g., battery life), and the system must also adapt to the device's constraints to work efficiently for long periods. In this work, we present a kNN-based HAR system and an extensive study of the influence of hyperparameters (window size, overlap, distance function, and the value of k) and parameters (sampling frequency) on accuracy, energy consumption, and inference time. We also study how hyperparameter configurations affect the model's performance across users and activities. Experimental results show that adapting the hyperparameters makes it possible to adjust the system's behavior to the user, the device, and the target service. These results motivate the development of a HAR system capable of automatically adapting the hyperparameters to the user, the device, and the service.
iCub Detecting Gazed Objects: A Pipeline Estimating Human Attention
Authors: Shiva Hanifi, Elisa Maiettini, Maria Lombardi, Lorenzo Natale
Abstract
This paper explores the role of eye gaze in human-robot interaction and proposes a novel system for detecting the objects gazed at by a human using solely visual feedback. The system leverages face detection, human attention prediction, and online object detection, allowing the robot to perceive and interpret human gaze accurately and paving the way for establishing joint attention with human partners. Additionally, a novel dataset collected with the humanoid robot iCub is introduced, comprising over 22,000 images from ten participants gazing at different annotated objects. This dataset serves as a benchmark for evaluating the performance of the proposed pipeline. The paper also includes an experimental analysis of the pipeline's effectiveness in a human-robot interaction setting, examining the performance of each component. Furthermore, the developed system is deployed on the humanoid robot iCub, and a supplementary video showcases its functionality. The results demonstrate the potential of the proposed approach to enhance social awareness and responsiveness in social robotics, as well as to improve assistance and support in collaborative scenarios, promoting efficient human-robot collaboration. The code and the collected dataset will be released upon acceptance.
SVQNet: Sparse Voxel-Adjacent Query Network for 4D Spatio-Temporal LiDAR Semantic Segmentation
Abstract
LiDAR-based semantic perception tasks are critical yet challenging for autonomous driving. Due to the motion of objects and static/dynamic occlusion, temporal information plays an essential role in reinforcing perception by enhancing and completing single-frame knowledge. Previous approaches either directly stack historical frames onto the current frame or build a 4D spatio-temporal neighborhood using KNN, which duplicates computation and hinders real-time performance. Based on our observation that stacking all the historical points would damage performance due to a large amount of redundant and misleading information, we propose the Sparse Voxel-Adjacent Query Network (SVQNet) for 4D LiDAR semantic segmentation. To take full advantage of the historical frames efficiently, we shunt the historical points into two groups with reference to the current points. One is the Voxel-Adjacent Neighborhood, carrying local enhancing knowledge; the other is the Historical Context, completing the global knowledge. We then propose new modules to select and extract the instructive features from the two groups. Our SVQNet achieves state-of-the-art performance in LiDAR semantic segmentation on the SemanticKITTI benchmark and the nuScenes dataset.
ConSlide: Asynchronous Hierarchical Interaction Transformer with Breakup-Reorganize Rehearsal for Continual Whole Slide Image Analysis
Abstract
Whole slide image (WSI) analysis has become increasingly important in the medical imaging community, enabling automated and objective diagnosis, prognosis, and therapeutic-response prediction. However, in clinical practice, the ever-evolving environment hampers the utility of WSI analysis models. In this paper, we propose the first continual learning framework for WSI analysis, named ConSlide, to tackle the challenges of enormous image size, utilization of hierarchical structure, and catastrophic forgetting under progressive model updating on multiple sequential datasets. Our framework contains three key components. The Hierarchical Interaction Transformer (HIT) is proposed to model and utilize the hierarchical structural knowledge of WSIs. The Breakup-Reorganize (BuRo) rehearsal method is developed for WSI data replay with an efficient region storing buffer and a WSI reorganizing operation. The asynchronous updating mechanism is devised to encourage the network to learn generic and specific knowledge respectively during the replay stage, based on a nested cross-scale similarity learning (CSSL) module. We evaluated ConSlide on four public WSI datasets from TCGA projects. It performs best over other state-of-the-art methods in a fair WSI-based continual learning setting and achieves a better trade-off between overall performance and forgetting on previous tasks.
Assessing Keyness using Permutation Tests
Authors: Thoralf Mildenberger
Subjects: Computation and Language (cs.CL); Applications (stat.AP)
Abstract
We propose a resampling-based approach for assessing keyness in corpus linguistics based on suggestions by Gries (2006, 2022). Traditional approaches based on hypothesis tests (e.g. Likelihood Ratio) model the corpora as independent identically distributed samples of tokens. This model does not account for the often observed uneven distribution of occurrences of a word across a corpus. When occurrences of a word are concentrated in few documents, large values of LLR and similar scores are in fact much more likely than the token-by-token sampling model predicts, leading to false positives. We replace the token-by-token sampling model with a model where corpora are samples of documents rather than tokens, which is much closer to the way corpora are actually assembled. We then use a permutation approach to approximate the distribution of a given keyness score under the null hypothesis of equal frequencies and obtain p-values for assessing significance. We need no assumption on how the tokens are organized within or across documents, and the approach works with essentially any keyness score. Hence, apart from obtaining more accurate p-values for scores like LLR, we can also assess significance for, e.g., the logratio, which has been proposed as a measure of effect size. An efficient implementation of the proposed approach is provided in the R package keyperm, available from GitHub.
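The reference implementation is the R package keyperm; as a language-agnostic illustration of the document-level permutation idea, here is a minimal sketch that keeps each document intact, shuffles the corpus labels, and recomputes the keyness score to approximate its null distribution. A toy frequency-difference score stands in for LLR or logratio.

```python
import numpy as np

def keyness_pvalue(docs_a, docs_b, word, n_perm=9999, seed=0):
    """docs_a/docs_b: lists of token lists. Returns a one-sided p-value."""
    rng = np.random.default_rng(seed)
    def score(group_a, group_b):
        rate = lambda docs: sum(d.count(word) for d in docs) / max(
            sum(len(d) for d in docs), 1)
        return rate(group_a) - rate(group_b)
    observed = score(docs_a, docs_b)
    pool, n_a = docs_a + docs_b, len(docs_a)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pool))        # documents stay intact
        group_a = [pool[i] for i in perm[:n_a]]
        group_b = [pool[i] for i in perm[n_a:]]
        hits += score(group_a, group_b) >= observed
    return (hits + 1) / (n_perm + 1)             # add-one for a valid p-value

docs_a = [["risk", "model", "risk"], ["risk", "test"]]
docs_b = [["model", "test"], ["data", "model", "test"]]
print(keyness_pvalue(docs_a, docs_b, "risk", n_perm=999))
```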
SoTaNa: The Open-Source Software Development Assistant
Authors: Ensheng Shi, Fengji Zhang, Yanlin Wang, Bei Chen, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, Hongbin Sun
Abstract
Software development plays a crucial role in driving innovation and efficiency across modern societies. To meet the demands of this dynamic field, there is a growing need for an effective software development assistant. However, existing large language models, as represented by ChatGPT, suffer from limited accessibility, including of training data and model weights. Although other large open-source models like LLaMA have shown promise, they still struggle with understanding human intent. In this paper, we present SoTaNa, an open-source software development assistant. SoTaNa utilizes ChatGPT to generate high-quality instruction-based data for the domain of software engineering and employs a parameter-efficient fine-tuning approach to enhance the open-source foundation model, LLaMA. We evaluate the effectiveness of SoTaNa in answering Stack Overflow questions and demonstrate its capabilities. Additionally, we discuss its capabilities in code summarization and generation, as well as the impact of varying the volume of generated data on model performance. Notably, SoTaNa can run on a single GPU, making it accessible to a broader range of researchers. Our code, model weights, and data are public at https://github.com/DeepSoftwareAnalytics/SoTaNa.
Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities
Authors: Yanjie Song, Yutong Wu, Yangyang Guo, Ran Yan, P. N. Suganthan, Yue Zhang, Witold Pedrycz, Yingwu Chen, Swagatam Das, Rammohan Mallipeddi, Oladayo Solomon Ajani
Subjects: Neural and Evolutionary Computing (cs.NE)
Abstract
Evolutionary algorithms (EAs), a class of stochastic search algorithms based on the principles of natural evolution, have received widespread acclaim for their exceptional performance in various optimization problems. While researchers worldwide have proposed a wide variety of EAs, certain limitations remain, such as slow convergence and poor generalization. Consequently, numerous scholars are actively exploring improvements to algorithmic structures, operators, search patterns, etc., to enhance optimization performance. Reinforcement learning (RL) integrated as a component of the EA framework has demonstrated superior performance in recent years. This paper presents a comprehensive survey on the integration of reinforcement learning into evolutionary algorithms, referred to as reinforcement learning-assisted evolutionary algorithms (RL-EAs). We first introduce reinforcement learning and evolutionary algorithms and provide a taxonomy of RL-EAs. We then discuss RL-EA integration methods, the RL-assisted strategies adopted by RL-EAs, and their applications according to the existing literature. The RL-assisted strategies are divided by implemented function into solution generation, learnable objective functions, algorithm/operator/sub-population selection, parameter adaptation, and other strategies. Subsequently, other attribute settings of RL in RL-EAs are discussed. Finally, we analyze potential directions for future research. This paper serves as a comprehensive resource for researchers interested in RL-EAs, providing an overview of the current state of the art and highlighting the associated challenges. By leveraging this survey, readers can swiftly gain insights into RL-EAs to develop efficient algorithms, thereby fostering further advancements in this emerging field.
Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models
Authors: Chi Chen, Ruoyu Qin, Fuwen Luo, Xiaoyue Mi, Peng Li, Maosong Sun, Yang Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recently, Multimodal Large Language Models (MLLMs) that enable Large Language Models (LLMs) to interpret images through visual instruction tuning have achieved significant success. However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities, lacking a more fine-grained cross-modal alignment. In this paper, we propose Position-enhanced Visual Instruction Tuning (PVIT), which extends the functionality of MLLMs by integrating an additional region-level vision encoder. This integration promotes a more detailed comprehension of images for the MLLM. In addition, to efficiently achieve a fine-grained alignment between the vision modules and the LLM, we design multiple data generation strategies to construct an image-region-language instruction dataset. Finally, we present both quantitative experiments and qualitative analysis that demonstrate the superiority of the proposed model. Code and data will be released at https://github.com/THUNLP-MT/PVIT.
Unlocking Fine-Grained Details with Wavelet-based High-Frequency Enhancement in Transformers
Abstract
Medical image segmentation is a critical task that plays a vital role in diagnosis, treatment planning, and disease monitoring. Accurate segmentation of anatomical structures and abnormalities from medical images can aid in the early detection and treatment of various diseases. In this paper, we address the local feature deficiency of the Transformer model by carefully re-designing the self-attention map to produce accurate dense predictions in medical images. To this end, we first apply a wavelet transformation to decompose the input feature map into low-frequency (LF) and high-frequency (HF) subbands. The LF segment is associated with coarse-grained features, while the HF components preserve fine-grained features such as texture and edge information. Next, we reformulate the self-attention operation using the efficient Transformer to perform both spatial and context attention on top of the frequency representation. Furthermore, to intensify the importance of boundary information, we impose an additional attention map by creating a Gaussian pyramid on top of the HF components. Moreover, we propose a multi-scale context enhancement block within the skip connections to adaptively model inter-scale dependencies and overcome the semantic gap between the stages of the encoder and decoder modules. Through comprehensive experiments, we demonstrate the effectiveness of our strategy on multi-organ and skin lesion segmentation benchmarks. The implementation code will be available upon acceptance at https://github.com/mindflow-institue/WaveFormer.
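A minimal sketch of the wavelet split described above: a 2D discrete wavelet transform decomposes a feature map into a low-frequency approximation and three high-frequency detail subbands (horizontal/vertical/diagonal edges). The attention machinery built on top of these subbands is not shown; the Haar wavelet and input size are arbitrary choices.

```python
import numpy as np
import pywt

feature_map = np.random.default_rng(0).random((64, 64))
LF, (HF_h, HF_v, HF_d) = pywt.dwt2(feature_map, "haar")

print(LF.shape)          # (32, 32): coarse-grained content
print(HF_h.shape)        # (32, 32): fine-grained texture/edge information
```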
Learning How to Price Charging in Electric Ride-Hailing Markets
Authors: Marko Maljkovic, Gustav Nilsson, Nikolas Geroliminis
Abstract
With the electrification of ride-hailing fleets, there will be a need to incentivize where and when ride-hailing vehicles should charge. In this work, we assume that a central authority wants to control the distribution of the vehicles and can do so by setting charging prices. Since there will likely be more than one ride-hailing company in the market, we model the problem as a single-leader multiple-follower Stackelberg game. The followers, i.e., the companies, compete for the charging resources under the prices set by the leader. We present a learning algorithm based on the concept of contextual bandits that allows the central authority to find an efficient pricing strategy. We also show how the exploratory phase of the learning can be improved if the leader has partial knowledge of the companies' objective functions. The efficiency of the proposed algorithm is demonstrated in a simulated case study for the city of Shenzhen, China.
Stand-alone Multigrid for Helmholtz Revisited: Towards Convergence Using Standard Components
Abstract
Getting standard multigrid to work efficiently for the high-frequency Helmholtz equation has been an open problem in applied mathematics for years. Much effort has been dedicated to finding solution methods that can use multigrid components to obtain solvers with linear time complexity. In this work we present one of the first stand-alone multigrid solvers for the 2D Helmholtz equation, using both constant- and non-constant-wavenumber model problems. We use standard smoothing techniques and do not impose any restrictions on the number of grid points per wavelength on the coarse grid. As a result we obtain full V- and W-cycle algorithms. The key features of the algorithm are the use of higher-order inter-grid transfer operators combined with a complex constant in the coarsening process. Using weighted-Jacobi smoothing, we obtain a solver which is $h$-independent and scales linearly with the wavenumber $k$. Numerical results using 1 to 5 GMRES(3) smoothing steps approach $k$- and $h$-independent convergence when combined with the higher-order inter-grid transfer operators and a small or even zero complex shift. The proposed algorithm provides an important step in the long-standing branch of research on scalable solvers for challenging wave propagation problems.
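For reference, the weighted-Jacobi smoother at the heart of such cycles is a one-liner: x <- x + w * D^{-1}(b - Ax). The sketch below applies it to a 1D Poisson stencil standing in for the (complex-shifted) Helmholtz operator of the paper; the weight 2/3 is the classical choice for this stencil.

```python
import numpy as np

def weighted_jacobi(A, b, x, weight=2.0 / 3.0, sweeps=3):
    d_inv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        x = x + weight * d_inv * (b - A @ x)   # damps high-frequency error
    return x

n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson stencil
b = np.ones(n)
x = weighted_jacobi(A, b, np.zeros(n))
print(np.linalg.norm(b - A @ x))                        # reduced residual
```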
Multi-Focus Querying of the Human Genome Information on Desktop and in Virtual Reality: an Evaluation
Abstract
The human genome is incredibly information-rich, consisting of approximately 25,000 protein-coding genes spread over 3.2 billion nucleotide base pairs contained within 24 unique chromosomes. Maintaining spatial context is important when exploring the genome, as it assists in understanding gene interactions and relationships. However, existing spatially aware methods of genome visualization are inefficient and limited in how they present gene information and spatial context. This study proposes an innovative approach to genome visualization and exploration using virtual reality (VR). To determine the optimal placement of gene information and evaluate its essentiality in a VR environment, we implemented and conducted a user study with three different interaction methods. Two interaction methods were developed in virtual reality to determine whether gene information is better suited to be embedded within the chromosome ideogram or separate from it. The final ideogram interaction method was performed on a desktop and served as a benchmark to evaluate the potential benefits of VR. Our findings reveal a preference for VR, despite longer task completion times. In addition, the placement of gene information within the visualization had a notable impact on users' ability to complete tasks. Specifically, gene information embedded within the chromosome ideogram was better suited for single-target identification and summarization tasks, while separating gene information from the ideogram better supported region comparison tasks.
TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs
Abstract
Precise hardware performance models play a crucial role in code optimizations. They can assist compilers in making heuristic decisions or aid autotuners in identifying the optimal configuration for a given program. For example, the autotuner for XLA, a machine learning compiler, discovered 10-20% speedup on state-of-the-art models serving substantial production traffic at Google. Although there exist a few datasets for program performance prediction, they target small sub-programs such as basic blocks or kernels. This paper introduces TpuGraphs, a performance prediction dataset on full tensor programs, represented as computational graphs, running on Tensor Processing Units (TPUs). Each graph in the dataset represents the main computation of a machine learning workload, e.g., a training epoch or an inference step. Each data sample contains a computational graph, a compilation configuration, and the execution time of the graph when compiled with the configuration. The graphs in the dataset are collected from open-source machine learning programs, featuring popular model architectures, e.g., ResNet, EfficientNet, Mask R-CNN, and Transformer. TpuGraphs provides 25x more graphs than the largest graph property prediction dataset (with comparable graph sizes), and 770x larger graphs on average compared to existing performance prediction datasets on machine learning programs. This graph-level prediction task on large graphs introduces new challenges in learning, ranging from scalability and training efficiency to model quality.
Ultrafast-and-Ultralight ConvNet-Based Intelligent Monitoring System for Diagnosing Early-Stage Mpox Anytime and Anywhere
Authors: Yubiao Yue, Xiaoqiang Shi, Li Qin, Xinyue Zhang, Yanmei Chen, Jialong Xu, Zipei Zheng, Yujun Cao, Di Liu, Zhenzhang Li, Yang Li
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Due to the lack of efficient diagnostic tools for monkeypox, its spread remains unchecked, presenting a formidable challenge to global health. While the high efficacy of deep learning models for monkeypox diagnosis has been demonstrated in related studies, neglect of inference speed, parameter size, and diagnostic performance on early-stage monkeypox renders those models inapplicable in real-world settings. To address these challenges, we propose an ultrafast and ultralight network named Fast-MpoxNet. Fast-MpoxNet possesses only 0.27M parameters and can process input images at 68 frames per second (FPS) on a CPU. To counteract the diagnostic performance limitation brought about by the small model capacity, it integrates an attention-based feature fusion module and a multiple-auxiliary-losses enhancement strategy for better detecting subtle image changes and optimizing weights. Using transfer learning and five-fold cross-validation, Fast-MpoxNet achieves 94.26% accuracy on the Mpox dataset, and its recall for early-stage monkeypox reaches 93.65%. With data augmentation, our model's accuracy rises to 98.40%, and it attains a Practicality Score (a new metric for measuring model practicality in real-time diagnosis applications) of 0.80. We also developed an application system named Mpox-AISM V2 for both personal computers and mobile phones. Mpox-AISM V2 features ultrafast responses, offline functionality, and easy deployment, enabling accurate and real-time diagnosis for both the public and individuals in various real-world settings, especially populous settings during an outbreak. Our work could potentially mitigate future monkeypox outbreaks and illuminate a fresh paradigm for developing real-time diagnostic tools in the healthcare field.
A Poisson-Based Approximation Algorithm for Stochastic Bin Packing of Bernoulli Items
Authors: Tomasz Kanas, Krzysztof Rzadca
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
A cloud scheduler packs tasks onto machines with contradictory goals of (1) using the machines as efficiently as possible while (2) avoiding overloading that might result in CPU throttling or out-of-memory errors. We take a stochastic approach that models the uncertainty of tasks' resource requirements by random variables. We focus on a little-explored case of items, each having a Bernoulli distribution that corresponds to tasks that are either idle or need a certain CPU share. RPAP, our online approximation algorithm, upper-bounds a subset of items by Poisson distributions. Unlike existing algorithms for Bernoulli items that prove the approximation ratio only up to a multiplicative constant, we provide a closed-form expression. We derive RPAPC, a combined approach having the same theoretical guarantees as RPAP. In simulations, RPAPC's results are close to FFR, a greedy heuristic with no worst-case guarantees; RPAPC slightly outperforms FFR on datasets with small items.
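A minimal sketch of the Poisson-bounding idea (the acceptance test only; RPAP's actual bound and packing policy are more refined). A Bernoulli(p) load is stochastically dominated by Poisson(-ln(1-p)), so a bin can conservatively check its overflow probability against a Poisson tail:

```python
import math
from scipy.stats import poisson

def fits(bernoulli_probs, capacity_units, overflow=0.01):
    """Each item needs 1 CPU unit with prob p; bound the total load by a
    Poisson variable (Bernoulli(p) is dominated by Poisson(-ln(1-p)))."""
    lam = sum(-math.log(1.0 - p) for p in bernoulli_probs)
    return poisson.sf(capacity_units, lam) <= overflow  # P(load > capacity)

bin_items = [0.2, 0.1, 0.3]
print(fits(bin_items + [0.25], capacity_units=3))  # can a 0.25 item be added?
```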
Keyword: faster
Significant-attributed Community Search in Heterogeneous Information Networks
Abstract
Community search is a personalized community discovery problem aimed at finding densely-connected subgraphs containing the query vertex. In particular, the search for communities with high-importance vertices has recently received a great deal of attention. However, existing works mainly focus on conventional homogeneous networks where vertices are of the same type, but are not applicable to heterogeneous information networks (HINs) composed of multi-typed vertices and different semantic relations, such as bibliographic networks. In this paper, we study the problem of high-importance community search in HINs. A novel community model is introduced, named heterogeneous significant community (HSC), to unravel the closely connected vertices of the same type with high attribute values through multiple semantic relationships. An HSC not only maximizes the exploration of indirect relationships across entities of the anchor-type but also incorporates their significance. To search the HSCs, we first develop online algorithms by exploiting both segment-based meta-path expansion and significance increment. Specifically, a solution-space reuse strategy based on structural nesting is designed to boost efficiency. In addition, we devise a two-level index to support searching HSCs in optimal time, based on which a space-efficient compact index is proposed. Extensive experiments on real-world large-scale HINs demonstrate that our solutions are effective and efficient for searching HSCs, and that the index-based algorithms are 2-4 orders of magnitude faster than the online algorithms.
Training normalizing flows with computationally intensive target probability distributions
Authors: Piotr Bialas, Piotr Korcyl, Tomasz Stebel
Subjects: Machine Learning (cs.LG); Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Lattice (hep-lat)
Abstract
Machine learning techniques, in particular the so-called normalizing flows, are becoming increasingly popular in the context of Monte Carlo simulations, as they can effectively approximate target probability distributions. In the case of lattice field theories (LFT), the target distribution is given by the exponential of the action. The common loss-function gradient estimator based on the "reparametrization trick" requires the calculation of the derivative of the action with respect to the fields. This can present a significant computational cost for complicated, non-local actions, e.g., the fermionic action in QCD. In this contribution, we propose an estimator for normalizing flows based on the REINFORCE algorithm that avoids this issue. We apply it to the two-dimensional Schwinger model with Wilson fermions at criticality and show that it is up to ten times faster in terms of wall-clock time, while requiring up to $30\%$ less memory than the reparametrization-trick estimator. It is also more numerically stable, allowing for single-precision calculations and the use of half-float tensor cores. We present an in-depth analysis of the origins of those improvements. We believe that these benefits will also appear outside the realm of LFT, in any case where the target probability distribution is computationally intensive.
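The following toy contrasts with the reparametrization trick by computing a textbook REINFORCE (score-function) surrogate for the reverse KL: the target term is evaluated inside torch.no_grad-style detachment, so the gradient never touches the derivative of the action. A learnable Gaussian stands in for a normalizing flow; the paper's estimator for actual flows follows the same principle with further variance-reduction details.

```python
# Textbook REINFORCE (score-function) surrogate for the reverse KL,
# KL(q || p): the term f = log q - log p is computed without autograd,
# so d(action)/d(fields) is never needed.
import torch

def action(x):                        # -log p(x) up to a constant; in LFT
    return 0.5 * ((x - 2.0) ** 2)     # this would be an expensive action

loc = torch.zeros(1, requires_grad=True)
log_scale = torch.zeros(1, requires_grad=True)

q = torch.distributions.Normal(loc, log_scale.exp())
x = q.sample((4096,))                 # plain sampling, no reparametrization
log_q = q.log_prob(x).squeeze(-1)
f = (log_q + action(x).squeeze(-1)).detach()   # log q - log p, detached
f = f - f.mean()                               # baseline: variance reduction
loss = (f * log_q).mean()                      # REINFORCE surrogate loss
loss.backward()                       # gradients flow only through log_q
print(loc.grad, log_scale.grad)
```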
Escaping the Sample Trap: Fast and Accurate Epistemic Uncertainty Estimation with Pairwise-Distance Estimators
Abstract
This work introduces a novel approach for epistemic uncertainty estimation for ensemble models using pairwise-distance estimators (PaiDEs). These estimators utilize the pairwise distance between model components to establish bounds on entropy and use these bounds as estimates of information-based criteria. Unlike recent deep learning methods for epistemic uncertainty estimation, which rely on sample-based Monte Carlo estimators, PaiDEs are able to estimate epistemic uncertainty up to 100$\times$ faster, over a larger space (up to 100$\times$), and perform more accurately in higher dimensions. To validate our approach, we conducted a series of experiments commonly used to evaluate epistemic uncertainty estimation: 1D sinusoidal data, Pendulum-v0, Hopper-v2, Ant-v2 and Humanoid-v2. For each experimental setting, an Active Learning framework was applied to demonstrate the advantages of PaiDEs for epistemic uncertainty estimation.
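As a minimal sketch of the PaiDE idea, assume the ensemble members output 1-D Gaussians: the mixture entropy can then be bounded with closed-form pairwise KL divergences (a Kolchinsky-Tracey-style pairwise bound), giving a sampling-free estimate of the mutual-information criterion. The KL-based distance and the 1-D Gaussian setting are illustrative choices, not necessarily the paper's.

```python
# A pairwise-KL estimate of the mutual-information criterion for an
# ensemble of 1-D Gaussian predictive distributions: no Monte Carlo
# samples are drawn.
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    # KL( N(m1, s1^2) || N(m2, s2^2) ), closed form
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def epistemic_paide(means, stds):
    n = len(means)
    kl = np.array([[kl_gauss(means[i], stds[i], means[j], stds[j])
                    for j in range(n)] for i in range(n)])
    # pairwise-distance estimate of the information-based criterion
    return -np.mean(np.log(np.mean(np.exp(-kl), axis=1)))

# agreeing members -> low epistemic uncertainty; disagreeing -> high
print(epistemic_paide(np.array([0.0, 0.1, -0.1]), np.ones(3)))
print(epistemic_paide(np.array([0.0, 5.0, -5.0]), np.ones(3)))
```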
Keyword: mobile
Data-driven Storytelling in Hybrid Immersive Display Environments
Authors: Xiaoyan Zhou, Yalong Yang, Francisco Ortega, Anil Ufuk Batmaz, Benjamin Lee
Abstract
Data-driven stories seek to inform and persuade audiences through the use of data visualisations and engaging narratives. These stories have now been highly optimised for viewing on desktop and mobile computers. In contrast, while immersive virtual and augmented reality (VR/AR) technologies have been shown to be more persuasive, no clear standard has yet emerged for such immersive stories. With this in mind, we propose that a hybrid data-driven storytelling approach can combine the familiarity of 2D display devices with the immersiveness and presence afforded by VR/AR headsets. In this position paper, we characterise hybrid data-driven stories by describing their design opportunities, considerations, and challenges. In particular, we describe how 2D and 3D display environments can play either complementary or symbiotic roles for the purposes of storytelling. We hope that this work inspires researchers to investigate how hybrid user interfaces may be used for storytelling.
Falcon: Accelerating Homomorphically Encrypted Convolutions for Efficient Private Mobile Network Inference
Abstract
Efficient networks, e.g., MobileNetV2 and EfficientNet, achieve state-of-the-art (SOTA) accuracy with lightweight computation. However, existing homomorphic encryption (HE)-based two-party computation (2PC) frameworks are not optimized for these networks and suffer from high inference overhead. We observe that the inefficiency mainly comes from the packing algorithm, which ignores the computation characteristics and the communication bottleneck of homomorphically encrypted depthwise convolutions. Therefore, in this paper, we propose Falcon, an effective dense packing algorithm for HE-based 2PC frameworks. Falcon features a zero-aware greedy packing algorithm and a communication-aware operator tiling strategy to improve the packing density for depthwise convolutions. Compared to SOTA HE-based 2PC frameworks, e.g., CrypTFlow2, Iron and Cheetah, Falcon achieves more than 15.6x, 5.1x and 1.8x latency reduction, respectively, at the operator level. Meanwhile, at the network level, Falcon allows for 1.4% and 4.2% accuracy improvement over Cheetah on the CIFAR-100 and TinyImagenet datasets with iso-communication, respectively.
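The toy below illustrates only the packing-density intuition behind zero-aware greedy packing: rather than one mostly-zero ciphertext per depthwise-conv channel, channels' nonzero payloads are greedily packed into shared slot vectors. The slot count and channel sizes are invented, and real HE layouts (rotations, strides, tiling) in Falcon are considerably more involved.

```python
# A toy model of zero-aware greedy packing density: channels' nonzero
# payloads are packed first-fit-decreasing into shared ciphertext slot
# vectors instead of one (mostly zero) ciphertext per channel.
SLOTS = 16  # slots per ciphertext (hypothetical)

def pack_channels(channel_sizes):
    loads = []                          # used slots per ciphertext
    for size in sorted(channel_sizes, reverse=True):
        for i, used in enumerate(loads):
            if used + size <= SLOTS:    # fits: pack into this ciphertext
                loads[i] = used + size
                break
        else:
            loads.append(size)          # open a new ciphertext
    return loads

sizes = [5, 3, 7, 2, 6, 4, 5, 3, 7, 2, 6, 4]  # nonzero entries per channel
print("naive:", len(sizes), "ciphertexts;",
      "zero-aware greedy:", len(pack_channels(sizes)), "ciphertexts")
```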
On Incentivizing Social Information Sharing in Routing Games
Authors: Songhua Li, Lingjie Duan
Subjects: Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA)
Abstract
We study a new incentive problem of social information sharing for location-based services (e.g., Waze and Yelp). The problem aims to crowdsource a mass of mobile users to learn massive point-of-interest (PoI) information while traveling and share it with each other as a public good. Given that crowdsourced users mind their own travel costs and possess various preferences over the PoI information along different paths, we formulate the problem as a non-atomic routing game with positive network externalities. We first show via price of anarchy (PoA) analysis that, in the absence of any incentive design, users' selfish routing on the lowest-cost path will limit information diversity and lead to an arbitrarily large efficiency loss from the social optimum. This motivates us to explore effective incentive mechanisms to remedy this while upholding individual rationality, incentive compatibility, and budget balance to ensure practical feasibility. We start by presenting an adaptive information restriction (AIR) mechanism that dynamically customizes restriction fractions, depending on the real user flows along different paths, to govern users' access to the shared PoI aggregation. We show that AIR achieves a PoA of 0.25 for homogeneous users (with identical PoI preferences over paths) and 0.125 for heterogeneous users in a typical network of two parallel paths. Further, we propose an adaptive side-payment mechanism (ASP) that charges or rewards users along certain paths. With those charges and rewards well-tailored, ASP significantly improves the PoA to 1 (optimal) and 0.5 for homogeneous and heterogeneous users in the two-path network, respectively. For a generalized network of multiple parallel paths, we further extend ASP to guarantee a PoA of 0.5. Additionally, our theoretical results are well corroborated by our numerical findings.
A Study on Hyperparameters Configurations for an Efficient Human Activity Recognition System
Authors: Paulo J.S. Ferreira, João Mendes Moreira, João M.P. Cardoso
Abstract
Human Activity Recognition (HAR) has been a popular research field due to the widespread availability of devices with sensors and computational power (e.g., smartphones and smartwatches). Applications for HAR systems have been extensively researched in recent literature, mainly due to the benefits of improving quality of life in areas like health and fitness monitoring. However, since people have different motion patterns when performing physical activities, a HAR system must adapt to user characteristics to maintain or improve accuracy. Mobile devices used to implement HAR systems, such as smartphones, have limited resources (e.g., battery life), so HAR systems must also adapt to the device's constraints to work efficiently for long periods. In this work, we present a kNN-based HAR system and an extensive study of the influence of hyperparameters (window size, overlap, distance function, and the value of k) and parameters (sampling frequency) on the system's accuracy, energy consumption, and inference time. We also study how hyperparameter configurations affect the model's per-user and per-activity performance. Experimental results show that adapting the hyperparameters makes it possible to adjust the system's behavior to the user, the device, and the target service. These results motivate the development of a HAR system capable of automatically adapting the hyperparameters to the user, the device, and the service.
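A minimal sketch of such a kNN pipeline is given below, with the studied hyperparameters (window size, overlap, k, distance function) exposed as explicit knobs; the synthetic signal and mean/std/min/max window features are stand-ins for real accelerometer data and the paper's feature set.

```python
# Minimal kNN-based HAR sketch exposing the studied hyperparameters.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def windows(signal, labels, size, overlap):
    step = max(1, int(size * (1 - overlap)))
    X, y = [], []
    for start in range(0, len(signal) - size + 1, step):
        seg = signal[start:start + size]
        X.append([seg.mean(), seg.std(), seg.min(), seg.max()])
        y.append(labels[start + size // 2])   # label at window centre
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(0, 0.5, 500), rng.normal(2, 1.5, 500)])
lab = np.array([0] * 500 + [1] * 500)         # two synthetic "activities"

X, y = windows(sig, lab, size=64, overlap=0.5)    # hyperparameters
clf = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X, y)
print("training accuracy:", clf.score(X, y))
```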
Ultrafast-and-Ultralight ConvNet-Based Intelligent Monitoring System for Diagnosing Early-Stage Mpox Anytime and Anywhere
Authors: Yubiao Yue, Xiaoqiang Shi, Li Qin, Xinyue Zhang, Yanmei Chen, Jialong Xu, Zipei Zheng, Yujun Cao, Di Liu, Zhenzhang Li, Yang Li
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Due to the lack of efficient diagnostic tools for monkeypox, its spread remains unchecked, presenting a formidable challenge to global health. While related studies have demonstrated the high efficacy of deep learning models for monkeypox diagnosis, their neglect of inference speed, parameter size, and diagnostic performance for early-stage monkeypox renders the models inapplicable in real-world settings. To address these challenges, we propose an ultrafast and ultralight network named Fast-MpoxNet. Fast-MpoxNet has only 0.27M parameters and can process input images at 68 frames per second (FPS) on a CPU. To counteract the diagnostic performance limitations imposed by its small capacity, it integrates an attention-based feature fusion module and a multiple-auxiliary-losses enhancement strategy for better detecting subtle image changes and optimizing weights. Using transfer learning and five-fold cross-validation, Fast-MpoxNet achieves 94.26% accuracy on the Mpox dataset. Notably, its recall for early-stage monkeypox reaches 93.65%. With data augmentation, our model's accuracy rises to 98.40% and it attains a Practicality Score (a new metric for measuring model practicality in real-time diagnosis applications) of 0.80. We also developed an application system named Mpox-AISM V2 for both personal computers and mobile phones. Mpox-AISM V2 features ultrafast responses, offline functionality, and easy deployment, enabling accurate and real-time diagnosis for both the public and individuals in various real-world settings, especially crowded settings during an outbreak. Our work could potentially mitigate future monkeypox outbreaks and illuminate a fresh paradigm for developing real-time diagnostic tools in healthcare.
Open Gaze: An Open-Source Implementation Replicating Google's Eye Tracking Paper
Abstract
Eye tracking has been a pivotal tool in diverse fields such as vision research, language analysis, and usability assessment. The majority of prior investigations, however, have concentrated on expansive desktop displays employing specialized, costly eye tracking hardware that lacks scalability. Remarkably little insight exists into ocular movement patterns on smartphones, despite their widespread adoption and significant usage. In this manuscript, we present an open-source implementation of a smartphone-based gaze tracker that emulates the methodology proposed by a Google paper (whose source code remains proprietary). Our focus is on attaining accuracy comparable to that attained through the Google paper's methodology, without the necessity for supplementary hardware. Through the integration of machine learning techniques, we unveil an accurate eye tracking solution that is native to smartphones. Our approach demonstrates precision akin to state-of-the-art mobile eye trackers, which are characterized by a cost that is two orders of magnitude higher. Leveraging the vast MIT GazeCapture dataset, which is available through registration on the dataset's website, we successfully replicate crucial findings from previous studies concerning ocular motion behavior in oculomotor tasks and saliency analyses during natural image observation. Furthermore, we emphasize the applicability of smartphone-based gaze tracking in discerning reading comprehension challenges. Our findings exhibit the inherent potential to amplify eye movement research by significant proportions, accommodating participation from thousands of subjects with explicit consent. This scalability not only fosters advancements in vision research, but also extends its benefits to domains such as accessibility enhancement and healthcare applications.
Keyword: pruning
There is no result
Keyword: diffusion
A Survey of Diffusion Based Image Generation Models: Issues and Their Solutions
Authors: Tianyi Zhang, Zheng Wang, Jing Huang, Mohiuddin Muhammad Tasnim, Wei Shi
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Recently, there has been significant progress in the development of large models. Following the success of ChatGPT, numerous language models have been introduced, demonstrating remarkable performance. Similar advancements have also been observed in image generation models, such as Google's Imagen, OpenAI's DALL-E 2, and Stable Diffusion, which have exhibited impressive capabilities in generating images. However, similar to large language models, these models still encounter unresolved challenges. Fortunately, the availability of open-source Stable Diffusion models and their underlying mathematical principles has enabled the academic community to extensively analyze the performance of current image generation models and make improvements based on this framework. This survey aims to examine the existing issues and current solutions pertaining to image generation models.
Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model
Authors: Xunpeng Yi, Han Xu, Hao Zhang, Linfeng Tang, Jiayi Ma
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Abstract
In this paper, we rethink the low-light image enhancement task and propose a physically explainable and generative diffusion model for low-light image enhancement, termed Diff-Retinex. We aim to integrate the advantages of the physical model and the generative network. Furthermore, we hope to supplement and even deduce the information missing in the low-light image through the generative network. Therefore, Diff-Retinex formulates the low-light image enhancement problem as Retinex decomposition and conditional image generation. In the Retinex decomposition, we integrate the superiority of attention in the Transformer and meticulously design a Retinex Transformer decomposition network (TDN) to decompose the image into illumination and reflectance maps. Then, we design multi-path generative diffusion networks to reconstruct the normal-light Retinex probability distribution and resolve the various degradations in these components, including dark illumination, noise, color deviation, and loss of scene contents. Owing to the generative diffusion model, Diff-Retinex puts the restoration of subtle low-light details into practice. Extensive experiments conducted on real-world low-light datasets qualitatively and quantitatively demonstrate the effectiveness, superiority, and generalization of the proposed method.
EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior
Abstract
While the image diffusion model has made significant strides in text-driven 3D content creation, it often falls short in accurately capturing the intended meaning of the text prompt, particularly with respect to direction information. This shortcoming gives rise to the Janus problem, where multi-faced 3D models are produced with the guidance of such diffusion models. In this paper, we present a robust pipeline for generating high-fidelity 3D content with orthogonal-view image guidance. Specifically, we introduce a novel 2D diffusion model that generates an image consisting of four orthogonal-view sub-images for the given text prompt. The 3D content is then created with this diffusion model, which enhances 3D consistency and provides strong structured semantic priors. This addresses the infamous Janus problem and significantly promotes generation efficiency. Additionally, we employ a progressive 3D synthesis strategy that results in substantial improvement in the quality of the created 3D contents. Both quantitative and qualitative evaluations show that our method demonstrates a significant improvement over previous text-to-3D techniques.
Age of Information Diffusion on Social Networks: Optimizing Multi-Stage Seeding Strategies
Authors: Songhua Li, Lingjie Duan
Subjects: Social and Information Networks (cs.SI); Discrete Mathematics (cs.DM); Information Theory (cs.IT)
Abstract
To promote viral marketing, major social platforms (e.g., Facebook Marketplace and Pinduoduo) repeatedly select and invite different users (as seeds) in online social networks to share fresh information about a product or service with their friends. We are thereby motivated to optimize the multi-stage seeding process of viral marketing in social networks, adopting the recent notions of the peak and average age of information (AoI) to measure the timeliness of promotion information received by network users. Our problem differs from the literature on information diffusion in social networks, which is limited to one-time seeding and overlooks AoI dynamics and information replacement over time. As a critical step, we develop closed-form expressions that characterize and trace AoI dynamics over any social network. For the peak AoI problem, we first prove the NP-hardness of our multi-stage seeding problem by a highly non-straightforward reduction from the dominating set problem, and then present a new polynomial-time algorithm that achieves good approximation guarantees (e.g., less than 2 for linear network topology). To minimize the average AoI, we also prove that our problem is NP-hard by properly reducing it from the set cover problem. Benefiting from our two-sided bound analysis of the average AoI objective, we build a new framework for approximation analysis and link our problem to a much simpler sum-distance minimization problem. This intriguing connection inspires us to develop another polynomial-time algorithm that achieves a good approximation guarantee. Additionally, our theoretical results are well corroborated by experiments on a real social network.
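The toy simulation below conveys the AoI bookkeeping on a linear topology: every user's age grows by one per step, stage-t seeds reset to zero, and fresh information relays one hop per step. It is only a stand-in for the paper's closed-form characterization (the exact dynamics and seeding model differ), but it lets one sanity-check peak and average AoI for a candidate seeding schedule.

```python
# Toy AoI dynamics under multi-stage seeding on a path graph.
import networkx as nx

def simulate_aoi(G, seeds_per_stage, horizon):
    age = {v: 0 for v in G}                         # toy initial condition
    peak, avg_sum = 0, 0.0
    for t in range(horizon):
        age = {v: a + 1 for v, a in age.items()}    # everyone ages by 1
        for s in seeds_per_stage.get(t, ()):        # stage-t seeding
            age[s] = 0
        # synchronous one-hop relay: adopt freshest age in closed nbhd
        age = {v: min([age[v]] + [age[u] for u in G[v]]) for v in G}
        peak = max(peak, max(age.values()))
        avg_sum += sum(age.values()) / len(age)
    return peak, avg_sum / horizon                  # peak AoI, average AoI

G = nx.path_graph(20)                               # linear network topology
print(simulate_aoi(G, {0: [5], 3: [15]}, horizon=10))
```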
Distribution-Aligned Diffusion for Human Mesh Recovery
Authors: Lin Geng Foo, Jia Gong, Hossein Rahmani, Jun Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recovering a 3D human mesh from a single RGB image is a challenging task due to depth ambiguity and self-occlusion, resulting in a high degree of uncertainty. Meanwhile, diffusion models have recently seen much success in generating high-quality outputs by progressively denoising noisy inputs. Inspired by their capability, we explore a diffusion-based approach for human mesh recovery, and propose a Human Mesh Diffusion (HMDiff) framework which frames mesh recovery as a reverse diffusion process. We also propose a Distribution Alignment Technique (DAT) that injects input-specific distribution information into the diffusion process, and provides useful prior knowledge to simplify the mesh recovery task. Our method achieves state-of-the-art performance on three widely used datasets. Project page: https://gongjia0208.github.io/HMDiff/.
Keyword: adaptive
Generalizable Zero-Shot Speaker Adaptive Speech Synthesis with Disentangled Representations
Abstract
While most research into speech synthesis has focused on synthesizing high-quality speech for in-dataset speakers, an equally essential yet unsolved problem is synthesizing speech for unseen speakers who are out-of-dataset with limited reference data, i.e., speaker adaptive speech synthesis. Many studies have proposed zero-shot speaker adaptive text-to-speech and voice conversion approaches aimed at this task. However, most current approaches suffer from the degradation of naturalness and speaker similarity when synthesizing speech for unseen speakers (i.e., speakers not in the training dataset) due to the poor generalizability of the model in out-of-distribution data. To address this problem, we propose GZS-TV, a generalizable zero-shot speaker adaptive text-to-speech and voice conversion model. GZS-TV introduces disentangled representation learning for both speaker embedding extraction and timbre transformation to improve model generalization and leverages the representation learning capability of the variational autoencoder to enhance the speaker encoder. Our experiments demonstrate that GZS-TV reduces performance degradation on unseen speakers and outperforms all baseline models in multiple datasets.
AccFlow: Backward Accumulation for Long-Range Optical Flow
Authors: Guangyang Wu, Xiaohong Liu, Kunming Luo, Xi Liu, Qingqing Zheng, Shuaicheng Liu, Xinyang Jiang, Guangtao Zhai, Wenyi Wang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recent deep learning-based optical flow estimators have exhibited impressive performance in generating local flows between consecutive frames. However, the estimation of long-range flows between distant frames, particularly under complex object deformation and large motion occlusion, remains a challenging task. One promising solution is to accumulate local flows explicitly or implicitly to obtain the desired long-range flow. Nevertheless, accumulation errors and flow misalignment can hinder the effectiveness of this approach. This paper proposes a novel recurrent framework called AccFlow, which recursively accumulates local flows backward using a deformable module called AccPlus. In addition, an adaptive blending module is designed alongside AccPlus to alleviate occlusion effects during backward accumulation and rectify the accumulation error. Notably, we demonstrate the superiority of backward accumulation over conventional forward accumulation, which to the best of our knowledge has not been explicitly established before. To train and evaluate the proposed AccFlow, we have constructed CVO, a large-scale high-quality dataset that provides ground-truth optical flow labels between adjacent and distant frames. Extensive experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation. Code is available at https://github.com/mulns/AccFlow.
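The sketch below shows only the geometric flow-composition primitive that any accumulation scheme, forward or backward, builds on: chaining an accumulated flow with the next local flow requires warping the latter by the former. AccPlus replaces this plain bilinear warp with a learned deformable module, so this is the skeleton, not AccFlow itself.

```python
# Composing an accumulated flow with the next local flow by warping.
import torch
import torch.nn.functional as F

def warp(field, disp):
    """Bilinearly sample `field` (B,2,H,W) at positions shifted by `disp`."""
    b, _, h, w = field.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float()      # (2,H,W), (x,y) order
    pos = grid.unsqueeze(0).expand(b, -1, -1, -1) + disp
    pos = torch.stack((2 * pos[:, 0] / (w - 1) - 1,  # normalize to [-1,1]
                       2 * pos[:, 1] / (h - 1) - 1), dim=-1)
    return F.grid_sample(field, pos, align_corners=True)

def compose(acc, local):
    # F_{0->t}(x) = F_{0->t-1}(x) + f_{t-1->t}(x + F_{0->t-1}(x))
    return acc + warp(local, acc)

acc = torch.zeros(1, 2, 32, 32)
for _ in range(5):                    # accumulate five random local flows
    acc = compose(acc, 0.1 * torch.randn(1, 2, 32, 32))
print(acc.shape)
```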
DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT
Abstract
Federated learning (FL) aims to collaboratively train a global model while ensuring client data privacy. However, FL faces challenges from the non-IID data distribution among clients. Clustered FL (CFL) has emerged as a promising solution, but most existing CFL frameworks are synchronous. SDAGFL, an asynchronous CFL framework based on directed acyclic graph distributed ledger techniques (DAG-DLT), was previously proposed, but its complete decentralization leads to high communication and storage costs. We propose DAG-ACFL, an asynchronous clustered FL framework also based on DAG-DLT. We first detail the components of DAG-ACFL. A tip selection algorithm based on the cosine similarity of model parameters is then designed to aggregate models from clients with similar distributions. An adaptive tip selection algorithm leveraging change-point detection dynamically determines the number of selected tips. We evaluate the clustering and training performance of DAG-ACFL on multiple datasets and analyze its communication and storage costs. Experiments show the superiority of DAG-ACFL in asynchronous clustered FL. By combining DAG-DLT with clustered FL, DAG-ACFL realizes robust, decentralized, and private model training with efficient performance.
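A minimal sketch of the cosine-similarity tip selection follows: a client scores candidate tips (model updates on the ledger) by parameter cosine similarity to its local model and aggregates the closest ones, which is what drives implicit clustering; the fixed n_select stands in for the adaptive, change-point-based choice of how many tips to take.

```python
# Cosine-similarity tip selection and aggregation (toy version).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_and_aggregate(local, tips, n_select=2):
    ranked = sorted(tips, key=lambda t: cosine(local, t), reverse=True)
    return np.mean(ranked[:n_select], axis=0)   # aggregate closest tips

rng = np.random.default_rng(0)
local = rng.normal(size=100)                    # this client's parameters
tips = [local + rng.normal(0, s, 100) for s in (0.1, 0.1, 3.0, 3.0)]
agg = select_and_aggregate(local, tips)
print("cosine(local, aggregate):", round(cosine(local, agg), 3))
```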
Performance Analysis of Finite Blocklength Transmissions Over Wiretap Fading Channels: An Average Information Leakage Perspective
Authors: Milad Tatar Mamaghani, Xiangyun Zhou, Nan Yang, A. Lee Swindlehurst, H. Vincent Poor
Abstract
Physical-layer security (PLS) is a promising technique to complement communication security in beyond-5G wireless networks. However, PLS developments in current research are often based on the ideal assumption of infinite coding blocklengths or perfect knowledge of the wiretap link's channel state information (CSI). In this work, we study the performance of finite blocklength (FBL) transmissions using a new secrecy metric - the average information leakage (AIL). We evaluate the exact and approximate AIL with arbitrary signaling and fading channels, assuming that the eavesdropper's instantaneous CSI is unknown. We then conduct case studies that use artificial noise (AN) beamforming to thoroughly analyze the AIL in both Rayleigh and Rician fading channels. The accuracy of the analytical expressions is verified through extensive simulations, and various insights regarding the impact of key system parameters on the AIL are obtained. Particularly, our results reveal that allowing a small level of AIL can potentially lead to significant reliability improvements. To improve the system performance, we formulate and solve an average secrecy throughput (AST) optimization problem via both non-adaptive and adaptive design strategies. Our findings highlight the significance of blocklength design and AN power allocation, as well as the impact of their trade-off on the AST.
Approximation Algorithms to Enhance Social Sharing of Fresh Point-of-Interest Information
Authors: Songhua Li, Lingjie Duan
Subjects: Social and Information Networks (cs.SI); Discrete Mathematics (cs.DM); Data Structures and Algorithms (cs.DS)
Abstract
In location-based social networks (LBSNs), such as Gowalla and Waze, users sense urban point-of-interest (PoI) information (e.g., restaurants' queue length and real-time traffic conditions) in their vicinity and share it with friends in online social networks. Given each user's social connections and the severe lags in disseminating fresh PoI information to all users, major LBSNs aim to enhance users' social PoI sharing by selecting a subset $k$ out of all $m$ users as hotspots and broadcasting their PoI information to the entire user community. This motivates us to study a new combinatorial optimization problem that integrates the urban sensing and online social networks. We prove that this problem is NP-hard and that it renders existing approximation solutions not viable. By analyzing the interplay between the sensing and social networks, we successfully transform the involved PoI-sharing process across the two networks into matrix computations for deriving a closed-form objective, and present a polynomial-time algorithm that ensures a ($1-\frac{m-2}{m}(\frac{k-1}{k})^k$) approximation of the optimum. Furthermore, we allow each selected user to move around and sense more PoI information to share. To this end, we propose an augmentation-adaptive algorithm, which benefits from a resource-augmentation technique and achieves a bounded approximation, ranging from $\frac{1}{k}(1-\frac{1}{e})$ to $1-\frac{1}{e}> 0.632$ by adjusting our augmentation factors. Particularly, when all sensing nodes are associated with users, we devise, by leveraging our augmentation-adaptive algorithm as a subroutine, an algorithm that eliminates the need for augmentation while still ensuring a satisfactory approximation of $1-\frac{m-2}{m}(\frac{k-1}{k})^k$. Finally, our theoretical results are corroborated by our simulation findings using both synthetic and real-world datasets.
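For a quick numeric feel of the first guarantee, the snippet below evaluates $1-\frac{m-2}{m}(\frac{k-1}{k})^k$ for a few (m, k) pairs; as k grows the factor approaches $1-\frac{m-2}{me}$, which stays above $1-1/e \approx 0.632$.

```python
# Numeric evaluation of the guarantee 1 - ((m-2)/m) * ((k-1)/k)^k for
# selecting k hotspots out of m users.
def ratio(m, k):
    return 1 - ((m - 2) / m) * ((k - 1) / k) ** k

for m, k in [(100, 5), (100, 20), (1000, 20)]:
    print(f"m={m}, k={k}: {ratio(m, k):.4f}")
```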
On Incentivizing Social Information Sharing in Routing Games
Authors: Songhua Li, Lingjie Duan
Subjects: Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA)
Abstract
We study a new incentive problem of social information sharing for location-based services (e.g., Waze and Yelp). The problem aims to crowdsource a mass of mobile users to learn massive point-of-interest (PoI) information while traveling and share it with each other as a public good. Given that crowdsourced users mind their own travel costs and possess various preferences over the PoI information along different paths, we formulate the problem as a non-atomic routing game with positive network externalities. We first show via price of anarchy (PoA) analysis that, in the absence of any incentive design, users' selfish routing on the lowest-cost path will limit information diversity and lead to an arbitrarily large efficiency loss from the social optimum. This motivates us to explore effective incentive mechanisms to remedy this while upholding individual rationality, incentive compatibility, and budget balance to ensure practical feasibility. We start by presenting an adaptive information restriction (AIR) mechanism that dynamically customizes restriction fractions, depending on the real user flows along different paths, to govern users' access to the shared PoI aggregation. We show that AIR achieves a PoA of 0.25 for homogeneous users (with identical PoI preferences over paths) and 0.125 for heterogeneous users in a typical network of two parallel paths. Further, we propose an adaptive side-payment mechanism (ASP) that charges or rewards users along certain paths. With those charges and rewards well-tailored, ASP significantly improves the PoA to 1 (optimal) and 0.5 for homogeneous and heterogeneous users in the two-path network, respectively. For a generalized network of multiple parallel paths, we further extend ASP to guarantee a PoA of 0.5. Additionally, our theoretical results are well corroborated by our numerical findings.
Unlocking Fine-Grained Details with Wavelet-based High-Frequency Enhancement in Transformers
Abstract
Medical image segmentation is a critical task that plays a vital role in diagnosis, treatment planning, and disease monitoring. Accurate segmentation of anatomical structures and abnormalities from medical images can aid in the early detection and treatment of various diseases. In this paper, we address the local feature deficiency of the Transformer model by carefully re-designing the self-attention map to produce accurate dense predictions in medical images. To this end, we first apply a wavelet transformation to decompose the input feature map into low-frequency (LF) and high-frequency (HF) subbands. The LF segment is associated with coarse-grained features, while the HF components preserve fine-grained features such as texture and edge information. Next, we reformulate the self-attention operation using the efficient Transformer to perform both spatial and context attention on top of the frequency representation. Furthermore, to intensify the importance of boundary information, we impose an additional attention map by creating a Gaussian pyramid on top of the HF components. Moreover, we propose a multi-scale context enhancement block within the skip connections to adaptively model inter-scale dependencies and overcome the semantic gap between stages of the encoder and decoder modules. Through comprehensive experiments, we demonstrate the effectiveness of our strategy on multi-organ and skin lesion segmentation benchmarks. The implementation code will be made available upon acceptance at \url{https://github.com/mindflow-institue/WaveFormer}.
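The first step, splitting a feature map into LF and HF subbands, can be sketched with an off-the-shelf wavelet library; PyWavelets and the Haar basis below are stand-ins for the in-network transform, and only the subband split itself is shown, not the attention built on top of it.

```python
# Splitting a feature map into LF and HF subbands with a 1-level 2-D DWT.
import numpy as np
import pywt

feat = np.random.default_rng(0).normal(size=(64, 64))  # toy feature map
LF, (LH, HL, HH) = pywt.dwt2(feat, "haar")

print("LF subband:", LF.shape)                       # coarse-grained content
print("HF subbands:", LH.shape, HL.shape, HH.shape)  # texture/edge details
print("HF energy fraction:",
      (LH**2 + HL**2 + HH**2).sum() / (feat**2).sum())
```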
Staleness-Alleviated Distributed GNN Training via Online Dynamic-Embedding Prediction
Abstract
Despite the recent success of Graph Neural Networks (GNNs), it remains challenging to train GNNs on large-scale graphs due to neighbor explosion. As a remedy, distributed computing becomes a promising solution by leveraging abundant computing resources (e.g., GPUs). However, the node dependency of graph data increases the difficulty of achieving high concurrency in distributed GNN training, which suffers from massive communication overhead. To address this, historical value approximation is deemed a promising class of distributed training techniques. It utilizes an offline memory to cache historical information (e.g., node embeddings) as an affordable approximation of the exact values and achieves high concurrency. However, such benefits come at the cost of involving dated training information, leading to staleness, imprecision, and convergence issues. To overcome these challenges, this paper proposes SAT (Staleness-Alleviated Training), a novel and scalable distributed GNN training framework that adaptively reduces embedding staleness. The key idea of SAT is to model the GNN's embedding evolution as a temporal graph and build a model upon it to predict future embeddings, which effectively alleviates the staleness of the cached historical embeddings. We propose an online algorithm to train the embedding predictor and the distributed GNN alternately, and further provide a convergence analysis. Empirically, we demonstrate that SAT can effectively reduce embedding staleness and thus achieve better performance and convergence speed on multiple large-scale graph datasets.
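The core caching-plus-prediction idea can be sketched as below: keep a short per-node history of cached embeddings and predict the current-step value from it instead of reusing the stale entry directly. The linear extrapolation is a deliberately simple stand-in for SAT's learned temporal model over the embedding-evolution graph.

```python
# Per-node embedding cache with a simple temporal predictor.
import numpy as np

class EmbeddingCache:
    def __init__(self, window=3):
        self.hist, self.window = {}, window

    def push(self, node, emb):
        self.hist.setdefault(node, []).append(np.asarray(emb))
        self.hist[node] = self.hist[node][-self.window:]

    def predict(self, node):
        h = self.hist[node]
        if len(h) < 2:
            return h[-1]                    # fall back to the stale value
        return h[-1] + (h[-1] - h[-2])      # linear extrapolation

cache = EmbeddingCache()
for t in range(4):                          # embedding drifts over training
    cache.push("v1", [1.0 + 0.1 * t, 2.0 - 0.1 * t])
print(cache.predict("v1"))                  # fresher than the raw cache entry
```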
Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers
Authors: Matthew Dutson, Yin Li, Mohit Gupta
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Vision Transformers achieve impressive accuracy across a range of visual recognition tasks. Unfortunately, their accuracy frequently comes with high computational costs. This is a particular issue in video recognition, where models are often applied repeatedly across frames or temporal chunks. In this work, we exploit temporal redundancy between subsequent inputs to reduce the cost of Transformers for video processing. We describe a method for identifying and re-processing only those tokens that have changed significantly over time. Our proposed family of models, Eventful Transformers, can be converted from existing Transformers (often without any re-training) and give adaptive control over the compute cost at runtime. We evaluate our method on large-scale datasets for video object detection (ImageNet VID) and action recognition (EPIC-Kitchens 100). Our approach leads to significant computational savings (on the order of 2-4x) with only minor reductions in accuracy.
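A minimal sketch of the token-gating mechanism follows: between frames, only tokens whose content changed beyond a threshold are pushed through the expensive block, while the rest reuse cached outputs; the threshold is the runtime compute knob. The block, threshold, and change measure below are simplified stand-ins for the paper's gating modules.

```python
# Token gating between frames: recompute only significantly changed tokens.
import torch

def expensive_block(tokens):                # placeholder for attention+MLP
    return torch.tanh(2.0 * tokens)

def eventful_step(tokens, prev_tokens, cached_out, thresh=0.1):
    delta = (tokens - prev_tokens).norm(dim=-1)     # per-token change
    active = delta > thresh                         # tokens to update
    out = cached_out.clone()
    out[active] = expensive_block(tokens[active])   # recompute only these
    return out, int(active.sum())

T, D = 196, 64
prev = torch.randn(T, D)
cached = expensive_block(prev)
frame = prev.clone()
frame[:20] += 0.5                           # only 20 tokens actually change
out, n = eventful_step(frame, prev, cached)
print(f"recomputed {n}/{T} tokens")
```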
Keyword: quantization
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
Abstract
Large language models (LLMs) have revolutionized natural language processing tasks. However, their practical deployment is hindered by their immense memory and computation requirements. Although recent post-training quantization (PTQ) methods are effective in reducing the memory footprint and improving the computational efficiency of LLMs, they hand-craft quantization parameters, which leads to low performance and fails to handle extremely low-bit quantization. To tackle this issue, we introduce an Omnidirectionally calibrated Quantization (OmniQuant) technique for LLMs, which achieves good performance in diverse quantization settings while maintaining the computational efficiency of PTQ by efficiently optimizing various quantization parameters. OmniQuant comprises two innovative components: Learnable Weight Clipping (LWC) and Learnable Equivalent Transformation (LET). LWC modulates the extreme values of weights by optimizing the clipping threshold. Meanwhile, LET tackles activation outliers by shifting the challenge of quantization from activations to weights through a learnable equivalent transformation. Operating within a differentiable framework using block-wise error minimization, OmniQuant can optimize the quantization process efficiently for both weight-only and weight-activation quantization. For instance, the LLaMA-2 model family, with sizes from 7B to 70B, can be processed with OmniQuant on a single A100-40G GPU within 1-16 hours using 128 samples. Extensive experiments validate OmniQuant's superior performance across diverse quantization configurations such as W4A4, W6A6, W4A16, W3A16, and W2A16. Additionally, OmniQuant demonstrates effectiveness in instruction-tuned models and delivers notable improvements in inference speed and memory reduction on real devices. Code and models are available at \url{https://github.com/OpenGVLab/OmniQuant}.
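The LWC component can be sketched in a few lines: the clipping threshold of a uniform weight quantizer becomes a learnable parameter, trained with a straight-through estimator against an error objective. The 4-bit setting, plain reconstruction loss, and single scalar clip factor below are simplifying assumptions relative to OmniQuant's full per-channel, block-wise output-error scheme.

```python
# Learnable Weight Clipping (LWC) sketch for a 4-bit uniform quantizer.
import torch

W = torch.randn(256, 256)                       # a weight block
gamma = torch.tensor(1.0, requires_grad=True)   # learnable clip factor
opt = torch.optim.Adam([gamma], lr=0.01)
levels = 2 ** 4 - 1                             # W4: 15 quantization steps

for _ in range(200):
    clip = gamma * W.abs().max()
    scale = 2 * clip / levels
    w_c = torch.maximum(torch.minimum(W, clip), -clip)  # clip outliers
    q = w_c / scale
    q = (q.round() - q).detach() + q            # straight-through round()
    loss = ((q * scale - W) ** 2).mean()        # quantization error proxy
    opt.zero_grad(); loss.backward(); opt.step()

print("learned clip factor:", round(gamma.item(), 3))
```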
A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance
Authors: Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig
Abstract
We present accumulator-aware quantization (A2Q), a novel weight quantization method designed to train quantized neural networks (QNNs) to avoid overflow when using low-precision accumulators during inference. A2Q introduces a unique formulation inspired by weight normalization that constrains the L1-norm of model weights according to accumulator bit width bounds that we derive. Thus, in training QNNs for low-precision accumulation, A2Q also inherently promotes unstructured weight sparsity to guarantee overflow avoidance. We apply our method to deep learning-based computer vision tasks to show that A2Q can train QNNs for low-precision accumulators while maintaining model accuracy competitive with a floating-point baseline. In our evaluations, we consider the impact of A2Q on both general-purpose platforms and programmable hardware. However, we primarily target model deployment on FPGAs because they can be programmed to fully exploit custom accumulator bit widths. Our experimentation shows accumulator bit width significantly impacts the resource efficiency of FPGA-based accelerators. On average across our benchmarks, A2Q offers up to a 2.3x reduction in resource utilization over 32-bit accumulator counterparts with 99.2% of the floating-point model accuracy.
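The accumulator-aware constraint can be sketched directly from the abstract: with unsigned N-bit inputs and a signed P-bit accumulator, a per-neuron L1 bound on the weights rules out overflow in the worst case, and a weight-normalization-style parametrization keeps it satisfied during training. The exact bound derivation and quantizer details in A2Q differ; this is the idea under those stated assumptions.

```python
# Accumulator-aware L1 constraint via weight-normalization parametrization.
import torch

def l1_bound(acc_bits, input_bits):
    # worst case: every input at its max magnitude 2^N - 1, signs aligned
    return (2 ** (acc_bits - 1) - 1) / (2 ** input_bits - 1)

class A2QLinearSketch(torch.nn.Module):
    def __init__(self, d_in, d_out, acc_bits=16, input_bits=8):
        super().__init__()
        self.v = torch.nn.Parameter(torch.randn(d_out, d_in))
        self.g = torch.nn.Parameter(torch.ones(d_out))
        self.bound = l1_bound(acc_bits, input_bits)

    def forward(self, x):
        g = self.g.clamp(min=0.0, max=self.bound)   # keep g within bound
        w = g[:, None] * self.v / self.v.abs().sum(dim=1, keepdim=True)
        return x @ w.T                              # per-row ||w||_1 = g

layer = A2QLinearSketch(128, 64)
y = layer(torch.rand(4, 128))
print("L1 bound per neuron:", round(layer.bound, 2), "| output:", y.shape)
```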
Keyword: efficient
Experimental Evaluation of Bird Strikes in Urban Air Mobility
Estimating Treatment Effects Using Costly Simulation Samples from a Population-Scale Model of Opioid Use Disorder
Bayesian low-rank adaptation for large language models
Business Metric-Aware Forecasting for Inventory Management
DebtViz: A Tool for Identifying, Measuring, Visualizing, and Monitoring Self-Admitted Technical Debt
Accelerating Continuous Integration with Parallel Batch Testing
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
MatchXML: An Efficient Text-label Matching Framework for Extreme Multi-label Text Classification
The time dimensional reduction method to determine the initial conditions without the knowledge of damping coefficients
DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT
Discovering Dichotomies for Problems in Database Theory
Using Adamic-Adar Index Algorithm to Predict Volunteer Collaboration: Less is More
Falcon: Accelerating Homomorphically Encrypted Convolutions for Efficient Private Mobile Network Inference
Self-supervised learning for hotspot detection and isolation from thermal images
Design and Control of a Bio-inspired Wheeled Bipedal Robot
LLM2KB: Constructing Knowledge Bases using instruction tuned context aware Large Language Models
Predictive Network Configuration with Hierarchical Spectral Clustering for Software Defined Vehicles
Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness
Significant-attributed Community Search in Heterogeneous Information Networks
Model-free Reinforcement Learning with Stochastic Reward Stabilization for Recommender Systems
A Bayesian Active Learning Approach to Comparative Judgement
Learning Compact Neural Networks with Deep Overparameterised Multitask Learning
A Study on Hyperparameters Configurations for an Efficient Human Activity Recognition System
iCub Detecting Gazed Objects: A Pipeline Estimating Human Attention
SVQNet: Sparse Voxel-Adjacent Query Network for 4D Spatio-Temporal LiDAR Semantic Segmentation
ConSlide: Asynchronous Hierarchical Interaction Transformer with Breakup-Reorganize Rehearsal for Continual Whole Slide Image Analysis
Assessing Keyness using Permutation Tests (R package keyperm, available from GitHub)
SoTaNa: The Open-Source Software Development Assistant
Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities
Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models
Unlocking Fine-Grained Details with Wavelet-based High-Frequency Enhancement in Transformers
Learning How to Price Charging in Electric Ride-Hailing Markets
Stand-alone Multigrid for Helmholtz Revisited: Towards Convergence Using Standard Components
Multi-Focus Querying of the Human Genome Information on Desktop and in Virtual Reality: an Evaluation
TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs
Ultrafast-and-Ultralight ConvNet-Based Intelligent Monitoring System for Diagnosing Early-Stage Mpox Anytime and Anywhere
A Poisson-Based Approximation Algorithm for Stochastic Bin Packing of Bernoulli Items
Keyword: faster
Estimating Treatment Effects Using Costly Simulation Samples from a Population-Scale Model of Opioid Use Disorder
Significant-attributed Community Search in Heterogeneous Information Networks
Training normalizing flows with computationally intensive target probability distributions
Escaping the Sample Trap: Fast and Accurate Epistemic Uncertainty Estimation with Pairwise-Distance Estimators
Keyword: mobile
Data-driven Storytelling in Hybrid Immersive Display Environments
Falcon: Accelerating Homomorphically Encrypted Convolutions for Efficient Private Mobile Network Inference
On Incentivizing Social Information Sharing in Routing Games
A Study on Hyperparameters Configurations for an Efficient Human Activity Recognition System
Ultrafast-and-Ultralight ConvNet-Based Intelligent Monitoring System for Diagnosing Early-Stage Mpox Anytime and Anywhere
Open Gaze: An Open-Source Implementation Replicating Google's Eye Tracking Paper
Keyword: pruning
There is no result
Keyword: diffusion
A Survey of Diffusion Based Image Generation Models: Issues and Their Solutions
Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model
EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior
Age of Information Diffusion on Social Networks: Optimizing Multi-Stage Seeding Strategies
Distribution-Aligned Diffusion for Human Mesh Recovery
Keyword: adaptive
Generalizable Zero-Shot Speaker Adaptive Speech Synthesis with Disentangled Representations
AccFlow: Backward Accumulation for Long-Range Optical Flow
DAG-ACFL: Asynchronous Clustered Federated Learning based on DAG-DLT
Performance Analysis of Finite Blocklength Transmissions Over Wiretap Fading Channels: An Average Information Leakage Perspective
Approximation Algorithms to Enhance Social Sharing of Fresh Point-of-Interest Information
On Incentivizing Social Information Sharing in Routing Games
Unlocking Fine-Grained Details with Wavelet-based High-Frequency Enhancement in Transformers
Staleness-Alleviated Distributed GNN Training via Online Dynamic-Embedding Prediction
Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers
Keyword: quantization
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance