Abstract
Truss structures are common at the macro-scale in a number of engineering applications and are increasingly used at the micro-scale to construct metamaterials. In analyzing the properties of a given truss structure, it is often necessary to understand how stress waves propagate through the system and/or its dynamic modes under time-dependent loading, so as to allow for maximally efficient use of space and material. This can be a computationally challenging task for particularly large or complex structures, with current methods requiring fine spatial discretization or evaluations of sizable matrices. Here we present a spectral method to compute the dynamics of trusses, inspired by results from fluid flow networks. Our model accounts for the full dynamics of linearly elastic truss elements via a network Laplacian: a matrix object that couples the motions of the structure's joints. We show that this method is equivalent to the continuum limit of linear finite element methods and is capable of reproducing natural frequencies and modes determined by more complex and computationally costlier methods.
RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model
Authors: Ahmad Sebaq, Mohamed ElHelw
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Satellite imagery generation and super-resolution are pivotal tasks in remote sensing, demanding high-quality, detailed images for accurate analysis and decision-making. In this paper, we propose an innovative and lightweight approach that employs two-stage diffusion models to gradually generate high-resolution satellite images purely from text prompts. Our pipeline comprises two interconnected diffusion models: a Low-Resolution Generation Diffusion Model (LR-GDM) that generates low-resolution images from text, and a Super-Resolution Diffusion Model (SRDM) that produces high-resolution images conditioned on the LR-GDM output. The LR-GDM effectively synthesizes low-resolution images by computing the correlations between the text embedding and the image embedding in a shared latent space, capturing the essential content and layout of the desired scenes. Subsequently, the SRDM takes the generated low-resolution image and its corresponding text prompt and efficiently produces the high-resolution counterpart, infusing fine-grained spatial details and enhancing visual fidelity. Experiments are conducted on the commonly used Remote Sensing Image Captioning Dataset (RSICD). Our results demonstrate that our approach outperforms existing state-of-the-art (SoTA) models in generating satellite images with realistic geographical features, weather conditions, and land structures, while achieving remarkable super-resolution results for increased spatial precision.
Effective Multi-Graph Neural Networks for Illicit Account Detection on Cryptocurrency Transaction Networks
Authors: Zhihao Ding, Jieming Shi, Qing Li, Jiannong Cao
Abstract
We study illicit account detection on transaction networks of cryptocurrencies, which are increasingly important in online financial markets. The surge of illicit activities on cryptocurrencies has resulted in billions in losses for normal users. Existing solutions either rely on tedious feature engineering to get handcrafted features, or are inadequate to fully utilize the rich semantics of cryptocurrency transaction data, and consequently yield sub-optimal performance. In this paper, we formulate the illicit account detection problem as a classification task over directed multigraphs with edge attributes, and present DIAM, a novel multi-graph neural network model to effectively detect illicit accounts on large transaction networks. First, DIAM includes an Edge2Seq module that automatically learns effective node representations preserving intrinsic transaction patterns of parallel edges, by considering both edge attributes and directed edge sequence dependencies. Then, utilizing the multigraph topology, DIAM employs a new Multigraph Discrepancy (MGD) module with a well-designed message passing mechanism to capture the discrepant features between normal and illicit nodes, supported by an attention mechanism. Assembling all techniques, DIAM is trained in an end-to-end manner. Extensive experiments, comparing against 14 existing solutions on 4 large cryptocurrency datasets of Bitcoin and Ethereum, demonstrate that DIAM consistently achieves the best performance in accurately detecting illicit accounts, while being efficient. For instance, on a Bitcoin dataset with 20 million nodes and 203 million edges, DIAM achieves an F1 score of 96.55%, significantly higher than the 83.92% of the best competitor.
Towards User Guided Actionable Recourse
Authors: Jayanth Yetukuri, Ian Hardy, Yang Liu
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY)
Abstract
Machine Learning's proliferation in critical fields such as healthcare, banking, and criminal justice has motivated the creation of tools which ensure trust and transparency in ML models. One such tool is Actionable Recourse (AR) for negatively impacted users. AR describes recommendations of cost-efficient changes to a user's actionable features to help them obtain favorable outcomes. Existing approaches for providing recourse optimize for properties such as proximity, sparsity, validity, and distance-based costs. However, an often-overlooked but crucial requirement for actionability is a consideration of user preference to guide the recourse generation process. In this work, we attempt to capture user preferences via soft constraints in three simple forms: i) scoring continuous features, ii) bounding feature values, and iii) ranking categorical features. We then propose a gradient-based approach to identify User Preferred Actionable Recourse (UP-AR), and carry out extensive experiments to verify the effectiveness of our approach.
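The flavor of gradient-based recourse under soft preference constraints can be sketched for a linear classifier (a toy illustration under assumed names and a simplified objective; UP-AR's actual formulation is richer): ascend the classifier score while penalizing movement of features the user prefers not to change.

```python
import numpy as np

def up_ar_sketch(x, w, b, pref, lr=0.1, lam=1.0, steps=500):
    """Gradient-based search for a feature change that flips a linear
    classifier's decision. `pref` penalizes movement per feature
    (larger = less preferred to change). A simplified sketch, not the
    paper's exact objective."""
    x_cf = x.copy()
    for _ in range(steps):
        if w @ x_cf + b > 0:                 # favorable outcome reached
            break
        # ascend the score while penalizing preference-weighted drift
        x_cf += lr * (w - lam * pref * (x_cf - x))
    return x_cf

w, b = np.array([1.0, 2.0]), -4.0
x = np.array([0.5, 0.5])                     # initially negative: score -2.5
x_cf = up_ar_sketch(x, w, b, pref=np.array([10.0, 1.0]))
delta = np.abs(x_cf - x)
```

With the heavy penalty on the first feature, the recourse found moves mostly along the second, user-preferred feature while still crossing the decision boundary.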
Integrated Photonic AI Accelerators under Hardware Security Attacks: Impacts and Countermeasures
Authors: Felipe Gohring de Magalhães, Mahdi Nikdast, Gabriela Nicolescu
Abstract
Integrated photonics based on the silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates the adoption of interfaces between the photonic and electronic subsystems, which are required for buffering data and for optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed using hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks profiting from such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on the system performance (i.e., power and phase distributions, which impact accuracy) and discuss possible solutions to counter such attacks.
Diffusion-based Time Series Data Imputation for Microsoft 365
Authors: Fangkai Yang, Wenjie Yin, Lu Wang, Tianci Li, Pu Zhao, Bo Liu, Paul Wang, Bo Qiao, Yudong Liu, Mårten Björkman, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang
Abstract
Reliability is extremely important for large-scale cloud systems like Microsoft 365. Cloud failures, such as disk and node failures, threaten service reliability, resulting in online service interruptions and economic loss. Existing works focus on predicting cloud failures and proactively taking action before failures happen. However, they suffer from poor data quality, such as missing data during model training and prediction, which limits their performance. In this paper, we focus on enhancing data quality through data imputation with the proposed Diffusion+, a sample-efficient diffusion model, to impute missing data efficiently based on the observed data. Our experiments and application practice show that our model contributes to improving the performance of the downstream failure prediction task.
Causal Structure Recovery of Linear Dynamical Systems: An FFT based Approach
Authors: Mishfad Shaikh Veedu, James Melbourne, Murti V. Salapaka
Subjects: Machine Learning (cs.LG); Dynamical Systems (math.DS); Methodology (stat.ME)
Abstract
Learning causal effects from data is a fundamental and well-studied problem across the sciences, especially when the cause-effect relationship is static in nature. However, causal effects are less explored when there are dynamical dependencies, i.e., when dependencies exist between entities across time. Identifying dynamic causal effects from time-series observations is computationally expensive compared to the static scenario. We demonstrate that the computational complexity of recovering the causation structure for the vector auto-regressive (VAR) model is $O(Tn^3N^2)$, where $n$ is the number of nodes, $T$ is the number of samples, and $N$ is the largest time-lag in the dependency between entities. We report a method with a reduced complexity of $O(Tn^3 \log N)$ that recovers the causation structure by obtaining frequency-domain (FD) representations of the time-series. Since the FFT accumulates all the time dependencies at every frequency, causal inference can be performed efficiently by treating the state variables as random variables at any given frequency. We additionally show that, for systems with linear time-invariant (LTI) interactions, the do-calculus machinery can be realized in the FD, resulting in versions of the classical single-door (with cycles), front-door, and back-door criteria. We demonstrate, for a large class of problems, that graph reconstruction using multivariate Wiener projections results in a significant computational advantage, with $O(n)$ complexity, over reconstruction algorithms such as the PC algorithm, which has $O(n^q)$ complexity, where $q$ is the maximum neighborhood size. This advantage accrues due to some remarkable properties of the phase response of the frequency-dependent Wiener coefficients, which are not present in any time-domain approach.
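A toy example shows why the frequency domain helps (a sketch of the general principle, not the paper's algorithm): a time lag collapses to a per-frequency phase factor after the FFT, so a lagged coupling can be estimated independently at every frequency with a simple Wiener-style ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
T, a = 4096, 0.8
u = rng.standard_normal(T)
# x depends on u through a one-sample lag (plus small noise); the
# circular shift keeps the DFT delay relation exact in this toy example.
x = a * np.roll(u, 1) + 0.01 * rng.standard_normal(T)

# In the frequency domain: X(f) = a * exp(-2j*pi*f/T) * U(f) + noise,
# so the coupling strength |a| is recoverable at each frequency at once.
U, X = np.fft.rfft(u), np.fft.rfft(x)
H = X * np.conj(U) / np.abs(U) ** 2      # per-frequency Wiener-style ratio
a_hat = np.abs(H).mean()                 # estimate of |a|
```

The estimate comes from a single $O(T \log T)$ FFT pass rather than fitting lagged regressions over all candidate lags in the time domain.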
Experience Capture in Shipbuilding through Computer Applications and Neural Networks
Authors: Sankaramangalam Ulhas Sangeet, Sivaprasad K, Yashwant R. Kamath
Abstract
It has always been a severe loss for any establishment when an experienced hand retires or moves to another firm: specific knowledge of what a job or position entails makes the work more efficient. To curtail such losses, it is possible to implement a system that takes input from a new employee regarding the challenges he/she is facing and matches it to a previous occurrence when someone else held the same position. Such a system could be built from input gathered over time from the array of individuals who managed that particular job, with the data processed through a neural network that recognizes the patterns. The paper is based on data collected from traditional wooden dhow builders and from some modern-day unconventional shipyards. Since the requirements for successful implementation in such scenarios seem too steep at the moment, an alternate approach is suggested: implementation through the design processes across multiple shipyards. The paper examines the traditional knowledge passed down through generations within a particular profession and analyzes how this knowledge and experience can be captured and preserved for future generations to build upon. A series of tools, including SharePoint, MATLAB, and similar software working in tandem, can be used for the design of such a system. This research provides valuable insight into how information sharing across generations can be applied for the effective use of production capabilities.
Detection of Unknown-Unknowns in Cyber-Physical Systems using Statistical Conformance with Physics Guided Process Models
Abstract
Unknown unknowns are operational scenarios in a cyber-physical system (CPS) that are not accounted for in the design and test phases. As such, under unknown-unknown scenarios, the operational behavior of the CPS is not guaranteed to meet requirements, such as safety and efficacy, specified using Signal Temporal Logic (STL) on the output trajectories. We propose a novel framework for analyzing the stochastic conformance of operational output characteristics of safety-critical cyber-physical systems that can discover unknown-unknown scenarios and evaluate potential safety hazards. We propose dynamics-induced hybrid recurrent neural networks (DiH-RNN) to mine a physics-guided surrogate model (PGSM), which is used to check model conformance using STL on the model coefficients. We demonstrate the detection of operational changes in an Artificial Pancreas (AP) due to unknown insulin cartridge errors.
Screening of Pneumonia and Urinary Tract Infection at Triage using TriNet
Authors: Stephen Z. Lu
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY)
Abstract
Due to steady growth in population and longevity, emergency department visits are increasing across North America. As more patients visit the emergency department, traditional clinical workflows become overloaded and inefficient, leading to prolonged wait times and reduced healthcare quality. One such workflow is the triage medical directive, impeded by limited human capacity, inaccurate diagnoses, and invasive over-testing. To address this issue, we propose TriNet: a machine learning model for medical directives that automates first-line screening at triage for conditions requiring downstream testing for diagnosis confirmation. To verify its screening potential, TriNet was trained on hospital triage data and achieved high positive predictive values in detecting pneumonia (0.86) and urinary tract infection (0.93). These models outperform current clinical benchmarks, indicating that machine-learning medical directives can offer cost-free, non-invasive screening with high specificity for common conditions, reducing the risk of over-testing while increasing emergency department efficiency.
Distributed Variational Inference for Online Supervised Learning
Authors: Parth Paritosh, Nikolay Atanasov, Sonia Martinez
Abstract
Developing efficient solutions for inference problems in intelligent sensor networks is crucial for the next generation of location, tracking, and mapping services. This paper develops a scalable distributed probabilistic inference algorithm that applies to continuous variables, intractable posteriors, and large-scale real-time data in sensor networks. In a centralized setting, variational inference is a fundamental technique for performing approximate Bayesian estimation, in which an intractable posterior density is approximated with a parametric density. Our key contribution lies in the derivation of a separable lower bound on the centralized estimation objective, which enables distributed variational inference with one-hop communication in a sensor network. Our distributed evidence lower bound (DELBO) consists of a weighted sum of observation likelihood and divergence to prior densities, and its gap to the measurement evidence is due to consensus and modeling errors. To solve binary classification and regression problems while handling streaming data, we design an online distributed algorithm that maximizes DELBO, and specialize it to Gaussian variational densities with non-linear likelihoods. The resulting distributed Gaussian variational inference (DGVI) method efficiently inverts a rank-one correction to the covariance matrix. Finally, we derive a diagonalized version for online distributed inference in high-dimensional models, and apply it to multi-robot probabilistic mapping using indoor LiDAR data.
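The classical identity that makes a rank-one covariance correction cheap to invert is the Sherman-Morrison formula (shown here as a general sketch; whether DGVI applies it in exactly this form is an assumption): given the current inverse, the update costs $O(d^2)$ instead of a fresh $O(d^3)$ inversion.

```python
import numpy as np

def rank_one_inverse_update(Sigma_inv, u, c=1.0):
    """Sherman-Morrison: inverse of (Sigma + c * u u^T) from Sigma's
    inverse, in O(d^2) instead of a fresh O(d^3) inversion."""
    v = Sigma_inv @ u
    return Sigma_inv - (c / (1.0 + c * (u @ v))) * np.outer(v, v)

rng = np.random.default_rng(1)
d = 6
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)          # well-conditioned SPD covariance
u = rng.standard_normal(d)

fast = rank_one_inverse_update(np.linalg.inv(Sigma), u)
direct = np.linalg.inv(Sigma + np.outer(u, u))
```

The `fast` and `direct` inverses agree to numerical precision, which is what allows streaming updates to stay cheap per measurement.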
DAMM: Directionality-Aware Mixture Model Parallel Sampling for Efficient Dynamical System Learning
Abstract
The Linear Parameter Varying Dynamical System (LPV-DS) is a promising framework for learning stable time-invariant motion policies in robot control. By employing statistical modeling and semi-definite optimization, LPV-DS encodes complex motions via non-linear DS, ensuring the robustness and stability of the system. However, the current LPV-DS scheme faces challenges in accurately interpreting trajectory data while maintaining model efficiency and computational efficiency. To address these limitations, we propose the Directionality-aware Mixture Model (DAMM), a new statistical model that leverages the Riemannian metric on the $d$-dimensional sphere $\mathbb{S}^d$ and efficiently incorporates non-Euclidean directional information alongside position. Additionally, we introduce a hybrid Markov chain Monte Carlo method that combines Gibbs sampling with split/merge proposals, facilitating parallel computation and enabling faster inference for near real-time learning performance. Through extensive empirical validation, we demonstrate that the improved LPV-DS framework with DAMM is capable of producing physically meaningful representations of the trajectory data and improved performance of the generated DS, while showcasing significantly enhanced learning speed compared to its previous iterations.
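The non-Euclidean ingredient here is that motion directions live on the unit sphere, where distance is the great-circle angle rather than a straight-line difference. A minimal sketch of that metric (the function names are illustrative, not DAMM's API):

```python
import numpy as np

def direction(v, eps=1e-12):
    """Unit-norm motion direction of a velocity sample."""
    return v / max(np.linalg.norm(v), eps)

def geodesic_distance(p, q):
    """Riemannian (great-circle) distance between unit vectors on S^d."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

# Two trajectory velocity samples differing by a 90-degree heading change:
dist = geodesic_distance(direction(np.array([1.0, 0.0])),
                         direction(np.array([0.0, 2.0])))
```

A Euclidean distance between the raw velocities would conflate speed and heading; the geodesic distance isolates the directional disagreement (here exactly π/2).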
Generative Algorithms for Fusion of Physics-Based Wildfire Spread Models with Satellite Data for Initializing Wildfire Forecasts
Authors: Bryan Shaddy, Deep Ray, Angel Farguell, Valentina Calaza, Jan Mandel, James Haley, Kyle Hilburn, Derek V. Mallia, Adam Kochanski, Assad Oberai
Subjects: Machine Learning (cs.LG); Atmospheric and Oceanic Physics (physics.ao-ph)
Abstract
Increases in wildfire activity and the resulting impacts have prompted the development of high-resolution wildfire behavior models for forecasting fire spread. Recent progress in using satellites to detect fire locations further provides the opportunity to use measurements to improve fire spread forecasts from numerical models through data assimilation. This work develops a method for inferring the history of a wildfire from satellite measurements, providing the necessary information to initialize coupled atmosphere-wildfire models from a measured wildfire state in a physics-informed approach. The fire arrival time, which is the time the fire reaches a given spatial location, acts as a succinct representation of the history of a wildfire. In this work, a conditional Wasserstein Generative Adversarial Network (cWGAN), trained with WRF-SFIRE simulations, is used to infer the fire arrival time from satellite active fire data. The cWGAN is used to produce samples of likely fire arrival times from the conditional distribution of arrival times given satellite active fire detections. Samples produced by the cWGAN are further used to assess the uncertainty of predictions. The cWGAN is tested on four California wildfires occurring between 2020 and 2022, and predictions for fire extent are compared against high-resolution airborne infrared measurements. Further, the predicted ignition times are compared with reported ignition times. An average Sørensen's coefficient of 0.81 for the fire perimeters and an average ignition time error of 32 minutes suggest that the method is highly accurate.
Compressing Vision Transformers for Low-Resource Visual Learning
Authors: Eric Youn, Sai Mitheran J, Sanjana Prabhu, Siyuan Chen
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Vision transformer (ViT) and its variants have swept through visual learning leaderboards and offer state-of-the-art accuracy in tasks such as image classification, object detection, and semantic segmentation by attending to different parts of the visual input and capturing long-range spatial dependencies. However, these models are large and computation-heavy. For instance, the recently proposed ViT-B model has 86M parameters, making it impractical for deployment on resource-constrained devices. As a result, their deployment in mobile and edge scenarios is limited. In our work, we aim to take a step toward bringing vision transformers to the edge by utilizing popular model compression techniques such as distillation, pruning, and quantization. Our chosen application environment is an unmanned aerial vehicle (UAV) that is battery-powered and memory-constrained, carrying a single-board computer on the scale of an NVIDIA Jetson Nano with 4GB of RAM. On the other hand, the UAV requires high accuracy close to that of state-of-the-art ViTs to ensure safe object avoidance in autonomous navigation, or correct localization of humans in search-and-rescue. Inference latency should also be minimized given the application requirements. Hence, our target is to enable rapid inference of a vision transformer on an NVIDIA Jetson Nano (4GB) with minimal accuracy loss. This allows us to deploy ViTs on resource-constrained devices, opening up new possibilities in surveillance, environmental monitoring, etc. Our implementation is made available at https://github.com/chensy7/efficient-vit.
Efficient Maximum $k$-Defective Clique Computation with Improved Time Complexity
Authors: Lijun Chang
Subjects: Data Structures and Algorithms (cs.DS); Social and Information Networks (cs.SI)
Abstract
$k$-defective cliques relax cliques by allowing up to $k$ edges to be missing from a complete graph. This relaxation enables finding larger near-cliques and has applications in link prediction, cluster detection, social network analysis, and transportation science. The problem of finding the largest $k$-defective clique has recently been studied, with several algorithms proposed in the literature. However, the currently fastest algorithm, KDBB, does not improve on the trivial $O(2^n)$ time complexity, and its practical performance is still not satisfactory. In this paper, we advance the state of the art for exact maximum $k$-defective clique computation in terms of both time complexity and practical performance. Moreover, we separate the techniques required for achieving the time complexity from those used purely for practical performance; this design choice may help the research community further improve practical efficiency without sacrificing the worst-case time complexity. Specifically, we first develop a general framework, kDC, that beats the trivial time complexity of $O(2^n)$ and achieves a better time complexity than all existing algorithms. The time complexity of kDC is achieved solely by the non-fully-adjacent-first branching rule, the excess-removal reduction rule, and the high-degree reduction rule. Then, to make kDC practically efficient, we further propose a new upper bound, two reduction rules, and an algorithm for efficiently computing a large initial solution. Extensive empirical studies on three benchmark graph collections with $290$ graphs in total demonstrate that kDC outperforms the currently fastest algorithm, KDBB, by several orders of magnitude.
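The defining condition is easy to state in code (a sketch of the definition only, not the kDC search algorithm): a vertex set is a $k$-defective clique when the induced subgraph misses at most $k$ of the $\binom{|S|}{2}$ possible edges.

```python
def is_k_defective_clique(vertices, adj, k):
    """Check whether `vertices` induces a k-defective clique in the graph
    given by adjacency sets `adj`: a complete graph minus at most k edges."""
    vs = list(vertices)
    missing = sum(1
                  for i in range(len(vs))
                  for j in range(i + 1, len(vs))
                  if vs[j] not in adj[vs[i]])
    return missing <= k

# Toy undirected graph as adjacency sets; {0,1,2,3} misses only edge (2,3).
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}}
```

Here `{0,1,2,3}` is a 1-defective clique but not a clique, which is exactly how the relaxation admits larger near-cliques.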
Joint Beamforming and Power Allocation for RIS Aided Full-Duplex Integrated Sensing and Uplink Communication System
Authors: Yuan Guo, Yang Liu, Qingqing Wu, Xiaoyang Li, Qingjiang Shi
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
Integrated sensing and communication (ISAC) capability is envisioned as a key feature of future cellular networks. Classical half-duplex (HD) radar sensing is conducted in a "first-emit-then-listen" manner. One challenge in realizing HD ISAC lies in the discrepancy between the two systems' time scheduling for transmitting and receiving. This difficulty can be overcome by full-duplex (FD) transceivers. Besides, ISAC generally has to compromise its communication rate to realize the sensing functionality. This loss can be compensated by the emerging reconfigurable intelligent surface (RIS) technology. This paper considers the joint design of beamforming, power allocation, and signal processing in an FD uplink communication system aided by RIS, which is a highly nonconvex problem. To resolve this challenge, by leveraging the cutting-edge majorization-minimization (MM) and penalty-dual-decomposition (PDD) methods, we develop an iterative solution that optimizes all variables using convex optimization techniques. Besides, by carefully exploiting the alternating direction method of multipliers (ADMM) and optimality analysis, we further develop a low-complexity solution that updates all variables analytically and runs highly efficiently. Numerical results are provided to verify the effectiveness and efficiency of our proposed algorithms and to demonstrate the significant performance boost achieved by employing RIS in the FD ISAC system.
Energy stable and maximum bound principle preserving schemes for the Q-tensor flow of liquid crystals
Authors: Dianming Hou, Xiaoli Li, Zhonghua Qiao, Nan Zheng
Abstract
In this paper, we propose two efficient fully-discrete schemes for the Q-tensor flow of liquid crystals by using the first- and second-order stabilized exponential scalar auxiliary variable (sESAV) approach in time and the finite difference method for spatial discretization. The modified discrete energy dissipation laws are unconditionally satisfied for both constructed schemes. A particular feature is that, for two-dimensional (2D) and a kind of three-dimensional (3D) Q-tensor flows, the unconditional maximum-bound-principle (MBP) preservation of the constructed first-order scheme is successfully established, and the proposed second-order scheme preserves the discrete MBP property under a mild restriction on the time-step sizes. Furthermore, we rigorously derive the corresponding error estimates for the fully-discrete second-order schemes by using the built-in stability results. Finally, various numerical examples validating the theoretical results, such as the orientation of liquid crystals in 2D and 3D, are presented for the constructed schemes.
Episodic Logit-Q Dynamics for Efficient Learning in Stochastic Teams
Authors: Onur Unlu, Muhammed O. Sayin
Subjects: Computer Science and Game Theory (cs.GT)
Abstract
We present new learning dynamics combining (independent) log-linear learning and value iteration for stochastic games within the auxiliary stage-game framework. The dynamics provably attain the efficient equilibrium (also known as the optimal equilibrium) in identical-interest stochastic games, going beyond the recent concentration of progress on provable convergence to some (possibly inefficient) equilibrium. The dynamics are also independent in the sense that agents take actions consistent with their local viewpoints to a reasonable extent, rather than explicitly seeking equilibrium. These aspects can be of practical interest in control applications of intelligent and autonomous systems. The key challenges are convergence to an inefficient equilibrium and the non-stationarity of the environment from a single agent's viewpoint due to the adaptation of others. The log-linear update plays an important role in addressing the former. We address the latter through a play-in-episodes scheme in which agents update their Q-function estimates only at the end of episodes.
Efficient Training for Visual Tracking with Deformable Transformer
Authors: Qingmao Wei, Guotian Zeng, Bi Zeng
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recent Transformer-based visual tracking models have showcased superior performance. Nevertheless, prior works have been resource-intensive, requiring prolonged GPU training hours and incurring high GFLOPs during inference due to inefficient training methods and convolution-based target heads. This intensive resource use renders them unsuitable for real-world applications. In this paper, we present DETRack, a streamlined end-to-end visual object tracking framework. Our framework utilizes an efficient encoder-decoder structure in which a deformable transformer decoder acts as the target head, achieving higher sparsity than traditional convolution heads and resulting in decreased GFLOPs. For training, we introduce a novel one-to-many label assignment and an auxiliary denoising technique, significantly accelerating the model's convergence. Comprehensive experiments affirm the effectiveness and efficiency of our proposed method. For instance, DETRack achieves 72.9% AO on the challenging GOT-10k benchmark using only 20% of the training epochs required by the baseline, and runs with lower GFLOPs than all transformer-based trackers.
Stacked Intelligent Metasurfaces for Multiuser Downlink Beamforming in the Wave Domain
Authors: Jiancheng An, Marco Di Renzo, Merouane Debbah, H. Vincent Poor, Chau Yuen
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
Intelligent metasurfaces have recently emerged as a promising technology that enables the customization of wireless environments by harnessing large numbers of inexpensive configurable scattering elements. However, prior studies have predominantly focused on single-layer metasurfaces, which are limited in the number of beam patterns they can steer accurately due to practical hardware restrictions. In contrast, this paper introduces a novel stacked intelligent metasurface (SIM) design. Specifically, we investigate the integration of SIM into the downlink of a multiuser multiple-input single-output (MISO) communication system, where a SIM, consisting of a multilayer metasurface structure, is deployed at the base station (BS) to facilitate transmit beamforming in the electromagnetic wave domain. This eliminates the need for conventional digital beamforming and high-resolution digital-to-analog converters at the BS. To this end, we formulate an optimization problem that aims to maximize the sum rate of all user equipments by jointly optimizing the transmit power allocation at the BS and the wave-based beamforming at the SIM, subject to both the transmit power budget and discrete phase shift constraints. Furthermore, we propose a computationally efficient algorithm for solving this joint optimization problem and elaborate on the potential benefits of employing SIM in wireless networks. Finally, the numerical results corroborate the effectiveness of the proposed SIM-enabled wave-based beamforming design and evaluate the performance improvement achieved by the proposed algorithm compared to various benchmark schemes. It is demonstrated that, considering the same number of transmit antennas, the proposed SIM-based system achieves about a 200\% improvement in sum rate compared to conventional MISO systems.
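The wave-domain beamforming of a stacked metasurface can be sketched as a matrix cascade (a simplified model under assumptions: fixed inter-layer propagation matrices and lossless tunable phase shifts; a real SIM model also includes attenuation and the antenna-to-first-layer response):

```python
import numpy as np

def sim_response(W_list, phase_list):
    """End-to-end response of a stacked metasurface: alternating fixed
    inter-layer propagation matrices W_l and tunable diagonal phase
    shifts diag(exp(j*theta_l)), multiplied layer by layer."""
    G = np.eye(W_list[0].shape[1], dtype=complex)
    for W, theta in zip(W_list, phase_list):
        G = W @ np.diag(np.exp(1j * theta)) @ G
    return G

# Sanity-check example: identity propagation, uniform phases per layer,
# so the cascade reduces to a single overall phase rotation.
W = [np.eye(4), np.eye(4)]
theta = [np.full(4, 0.3), np.full(4, 0.2)]
G = sim_response(W, theta)
```

Optimizing the per-layer phase vectors shapes `G` directly in the wave domain, which is what removes the need for digital beamforming at the BS.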
Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
Abstract
Symmetry, a concept fundamental to understanding our environment, often oversimplifies reality from a mathematical perspective. Humans are a prime example, deviating from perfect symmetry both in appearance and in cognitive biases (e.g., having a dominant hand). Nevertheless, our brains can easily overcome these imperfections and efficiently adapt to symmetrical tasks. The driving motivation behind this work lies in capturing this ability through reinforcement learning. To this end, we introduce Adaptive Symmetry Learning (ASL), a model-minimization actor-critic extension that addresses incomplete or inexact symmetry descriptions by adapting itself during the learning process. ASL consists of a symmetry-fitting component and a modular loss function that enforces a common symmetric relation across all states while adapting to the learned policy. The performance of ASL is compared to existing symmetry-enhanced methods in a case study involving a four-legged ant model for multidirectional locomotion tasks. The results demonstrate that ASL is capable of recovering from large perturbations and generalizing knowledge to hidden symmetric states. It achieves comparable or better performance than alternative methods in most scenarios, making it a valuable approach for leveraging model symmetry while compensating for inherent perturbations.
Gesture-Informed Robot Assistance via Foundation Models
Abstract
Gestures serve as a fundamental and significant mode of non-verbal communication among humans. Deictic gestures (such as pointing towards an object), in particular, offer valuable means of efficiently expressing intent in situations where language is inaccessible, restricted, or highly specialized. As a result, it is essential for robots to comprehend gestures in order to infer human intentions and establish more effective coordination with them. Prior work often relies on a rigid hand-coded library of gestures along with their meanings. However, interpretation of gestures is often context-dependent, requiring more flexibility and common-sense reasoning. In this work, we propose a framework, GIRAF, for more flexibly interpreting gesture and language instructions by leveraging the power of large language models. Our framework is able to accurately infer human intent and contextualize the meaning of their gestures for more effective human-robot collaboration. We instantiate the framework for interpreting deictic gestures in table-top manipulation tasks and demonstrate that it is both effective and preferred by users, achieving 70% higher success rates than the baseline. We further demonstrate GIRAF's ability to reason about diverse types of gestures by curating a GestureInstruct dataset consisting of 36 different task scenarios. GIRAF achieved an 81% success rate on finding the correct plan for tasks in GestureInstruct. Website: https://tinyurl.com/giraf23
MLN-net: A multi-source medical image segmentation method for clustered microcalcifications using multiple layer normalization
Authors: Ke Wang, Zanting Ye, Xiang Xie, Haidong Cui, Tao Chen, Banteng Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Accurate segmentation of clustered microcalcifications in mammography is crucial for the diagnosis and treatment of breast cancer. Despite exhibiting expert-level accuracy, recent deep learning advancements in medical image segmentation contribute little to practical applications because of the domain shift resulting from differences in patient postures, individual gland density, imaging modalities of mammography, etc. In this paper, a novel framework named MLN-net, which can accurately segment multi-source images using only single-source images, is proposed for clustered microcalcification segmentation. We first propose a source-domain image augmentation method to generate multi-source images, leading to improved generalization. A structure of multiple layer normalization (LN) layers is then used to construct the segmentation network, which proves efficient for clustered microcalcification segmentation across different domains. Additionally, a branch selection strategy is designed to measure the similarity between the source-domain and target-domain data. To validate the proposed MLN-net, extensive analyses are performed, including ablation experiments and comparisons with 12 baseline methods. These experiments validate the effectiveness of MLN-net in segmenting clustered microcalcifications from different domains and show that its segmentation accuracy surpasses state-of-the-art methods. Code will be available at https://github.com/yezanting/MLN-NET-VERSON1.
Pre- and post-contact policy decomposition for non-prehensile manipulation with zero-shot sim-to-real transfer
Authors: Minchan Kim, Junhyek Han, Jaehyung Kim, Beomjoon Kim
Abstract
We present a system for non-prehensile manipulation tasks that require a significant number of contact mode transitions and the use of environmental contacts to successfully manipulate an object to a target location. Our method is based on deep reinforcement learning which, unlike state-of-the-art planning algorithms, does not require a priori knowledge of the physical parameters of the object or environment, such as friction coefficients or centers of mass. The planning time is reduced to the simple feed-forward prediction time of a neural network. We propose a computational structure, action space design, and curriculum learning scheme that facilitate efficient exploration and sim-to-real transfer. In challenging real-world non-prehensile manipulation tasks, we show that our method can generalize over different objects and succeeds even for novel objects not seen during training. Project website: https://sites.google.com/view/nonprenehsile-decomposition
Improving Code Generation by Dynamic Temperature Sampling
Authors: Yuqi Zhu, Jia Allen Li, Ge Li, YunFei Zhao, Jia Li, Zhi Jin, Hong Mei
Subjects: Software Engineering (cs.SE); Computation and Language (cs.CL)
Abstract
Recently, Large Language Models (LLMs) have shown impressive results in code generation. However, existing decoding strategies are designed for Natural Language (NL) generation, overlooking the differences between NL and programming languages (PL). Due to this oversight, a better decoding strategy for code generation remains an open question. In this paper, we conduct the first systematic study to explore a decoding strategy specialized for code generation. With an analysis of the loss distributions of code tokens, we find that code tokens can be divided into two categories: challenging tokens that are difficult to predict and confident tokens that can be easily inferred. Among them, the challenging tokens mainly appear at the beginning of a code block. Inspired by the above findings, we propose a simple yet effective method: Adaptive Temperature (AdapT) sampling, which dynamically adjusts the temperature coefficient when decoding different tokens. We apply a larger temperature when sampling challenging tokens, allowing LLMs to explore diverse choices. We employ a smaller temperature for confident tokens, avoiding the influence of tail randomness noise. We apply AdapT sampling to LLMs of different sizes and conduct evaluations on two popular datasets. Results show that AdapT sampling significantly outperforms state-of-the-art decoding strategies.
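The core mechanism, a temperature that depends on the token being decoded, can be sketched in a few lines; the block-start heuristic and the coefficient values below are hypothetical stand-ins, not the paper's actual schedule:

```python
import numpy as np

def softmax(logits, temperature):
    # Temperature-scaled softmax over next-token logits.
    z = logits / temperature
    z = z - z.max()               # numerical stability
    p = np.exp(z)
    return p / p.sum()

def adapt_temperature(is_block_start, t_hi=1.2, t_lo=0.6):
    # Hypothetical schedule: higher temperature for challenging tokens
    # (e.g. at the start of a code block), lower for confident tokens.
    return t_hi if is_block_start else t_lo

logits = np.array([2.0, 1.0, 0.5, 0.1])
p_challenging = softmax(logits, adapt_temperature(True))
p_confident = softmax(logits, adapt_temperature(False))
# A higher temperature flattens the distribution, encouraging exploration.
```

With the higher temperature the maximum probability drops and sampling explores more alternatives, which is the intended effect for hard-to-predict tokens.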
Diffusion Model is Secretly a Training-free Open Vocabulary Semantic Segmenter
Abstract
Recent research has explored the utilization of pre-trained text-image discriminative models, such as CLIP, to tackle the challenges associated with open-vocabulary semantic segmentation. However, it is worth noting that the alignment process based on contrastive learning employed by these models may unintentionally result in the loss of crucial localization information and object completeness, which are essential for achieving accurate semantic segmentation. More recently, there has been an emerging interest in extending the application of diffusion models beyond text-to-image generation tasks, particularly in the domain of semantic segmentation. These approaches utilize diffusion models either for generating annotated data or for extracting features to facilitate semantic segmentation. This typically involves training segmentation models by generating a considerable amount of synthetic data or incorporating additional mask annotations. In contrast, we uncover the potential of generative text-to-image conditional diffusion models as highly efficient open-vocabulary semantic segmenters, and introduce a novel training-free approach named DiffSegmenter. Specifically, by feeding an input image and candidate classes into an off-the-shelf pre-trained conditional latent diffusion model, the cross-attention maps produced by the denoising U-Net are directly used as segmentation scores, which are further refined and completed by the subsequent self-attention maps. Additionally, we carefully design effective textual prompts and a category filtering mechanism to further enhance the segmentation results. Extensive experiments on three benchmark datasets show that the proposed DiffSegmenter achieves impressive results for open-vocabulary semantic segmentation.
Technical Report: A Contact-aware Feedback CPG System for Learning-based Locomotion Control in a Soft Snake Robot
Abstract
Integrating contact-awareness into a soft snake robot and efficiently controlling its locomotion in response to contact information present significant challenges. This paper aims to solve the contact-aware locomotion problem of a soft snake robot by developing bio-inspired contact-aware locomotion controllers. To provide effective contact information to the controllers, we develop a scale-covered sensor structure mimicking natural snakes' \textit{scale sensilla}. In the design of the control framework, our core contribution is a novel sensory feedback mechanism for the Matsuoka central pattern generator (CPG) network. This mechanism allows the Matsuoka CPG system to work like a "spinal cord" in the whole contact-aware control scheme, simultaneously taking stimuli, including tonic input signals from the "brain" (a goal-tracking locomotion controller) and sensory feedback signals from the "reflex arc" (the contact reactive controller), and generating rhythmic signals to effectively actuate the soft snake robot to slither through densely placed obstacles. In the design of the "reflex arc", we develop two types of reactive controllers -- 1) a reinforcement learning (RL) sensor regulator that learns to manipulate the sensory feedback inputs of the CPG system, and 2) a local reflexive sensor-CPG network that directly connects sensor readings and the CPG's feedback inputs in a special topology. These two reactive controllers respectively facilitate two different contact-aware locomotion control schemes. Both control schemes are tested and evaluated on the soft snake robot, showing promising performance in contact-aware locomotion tasks. The experimental results further verify the benefit of the Matsuoka CPG system in bio-inspired robot controller design.
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
Authors: Liang Li, Qingyuan Li, Bo Zhang, Xiangxiang Chu
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Abstract
As the size of large language models (LLMs) continues to grow, model compression without sacrificing accuracy has become a crucial challenge for deployment. While some quantization methods, such as GPTQ, have made progress in achieving acceptable 4-bit weight-only quantization, attempts at lower-bit quantization often result in severe performance degradation. In this paper, we introduce a technique called norm tweaking, which can be used as a plugin in current PTQ methods to achieve high precision while being cost-efficient. Our approach is inspired by the observation that rectifying the quantized activation distribution to match its float counterpart can readily restore accuracy for LLMs. To achieve this, we carefully design a tweaking strategy that includes calibration data generation and a channel-wise distance constraint to update the weights of normalization layers for better generalization. We conduct extensive experiments on various datasets using several open-source LLMs. Our method demonstrates significant improvements in both weight-only quantization and joint quantization of weights and activations, surpassing existing PTQ methods. On GLM-130B and OPT-66B, our method even achieves the same level of accuracy at 2-bit quantization as their float counterparts. Our simple and effective approach makes it more practical for real-world applications.
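For intuition, here is a minimal sketch of the round-to-nearest weight quantization that PTQ pipelines start from, together with a channel-wise distance between float and quantized activations, the kind of mismatch that norm tweaking reduces by adjusting normalization-layer weights. All shapes and data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def quantize_symmetric(w, n_bits):
    # Round-to-nearest symmetric uniform quantization (a generic PTQ baseline).
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))        # illustrative weight matrix
X = rng.normal(size=(32, 16))        # illustrative calibration activations

act_float = X @ W.T
act_quant = X @ quantize_symmetric(W, 4).T
# Channel-wise relative distance between float and quantized activations --
# a constraint of this kind guides the update of the normalization weights.
channel_dist = (np.linalg.norm(act_float - act_quant, axis=0)
                / np.linalg.norm(act_float, axis=0))
```

In the actual method the distance is driven down by gradient updates restricted to the LayerNorm parameters, keeping the quantized weights fixed.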
Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative Inference Framework for Deep Learning Classification Tasks
Authors: Jingyi Li, Guocheng Liao, Lin Chen, Xu Chen
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Deep learning classifiers are crucial in the age of artificial intelligence. Device-edge-based collaborative inference has been widely adopted as an efficient framework for promoting their application in IoT and 5G/6G networks. However, it suffers from accuracy degradation under non-i.i.d. data distributions and from privacy disclosure. Regarding accuracy degradation, direct use of transfer learning and split learning incurs high cost, and privacy issues remain. Regarding privacy disclosure, cryptography-based approaches lead to huge overhead, while other lightweight methods assume that the ground truth is non-sensitive and can be exposed; yet for many applications, the ground truth is the user's crucial privacy-sensitive information. In this paper, we propose Roulette, a task-oriented semantic privacy-preserving collaborative inference framework for deep learning classifiers. Beyond the input data, we treat the ground truth of the data as private information. We develop a novel paradigm of split learning where the back-end DNN is frozen and the front-end DNN is retrained to be both a feature extractor and an encryptor. Moreover, we provide a differential privacy guarantee and analyze the hardness of ground-truth inference attacks. To validate the proposed Roulette, we conduct extensive performance evaluations on realistic datasets, which demonstrate that Roulette can effectively defend against various attacks while achieving good model accuracy. In a situation where the non-i.i.d. degree is very severe, Roulette improves the inference accuracy by 21\% averaged over benchmarks, while making the accuracy of discrimination attacks almost equivalent to random guessing.
Geometry and Wideband Performance of a Maximal Ratio Combining Beam
Authors: Andrea Bedin, Andrea Zanella
Subjects: Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)
Abstract
This paper discusses the geometrical features and wideband performance of the beam with maximal ratio combining coefficients for a generic multi-antenna receiver. In particular, when the channel is a linear combination of plane waves, we show that such a beam can be decomposed into a linear combination of beams pointed in the direction of each plane wave, and we compute how many directions can be effectively utilized. This highlights that such a beam better exploits the spatial diversity provided by the channel, and is therefore expected to be more robust to disruptions. Moreover, we compute the achieved Signal-to-Noise Ratio for a wideband receiver, showing that it is not significantly worse than for other methods. Finally, we provide some insights into the robustness of the method by simulating the impact of the blockage of one of the multipath components.
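The classical narrowband property underlying this analysis, that the maximal ratio combining beam attains the sum of the per-branch SNRs, can be checked numerically in a few lines (channel and noise values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                                               # receive antennas
h = rng.normal(size=M) + 1j * rng.normal(size=M)    # narrowband channel
sigma2 = 0.5                                        # per-branch noise power

w = h.conj()                                        # MRC coefficients (up to scaling)
snr_mrc = np.abs(w @ h) ** 2 / (np.linalg.norm(w) ** 2 * sigma2)
snr_branches = np.abs(h) ** 2 / sigma2
# The combined SNR equals the sum of the per-branch SNRs.
```

The paper's contribution concerns what happens beyond this textbook case: the geometry of the resulting beam over plane-wave components and its behavior for a wideband receiver.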
Adjacency-hopping de Bruijn Sequences for Non-repetitive Coding
Authors: Bin Chen, Zhenglin Liang, Shiqian Wu
Subjects: Information Theory (cs.IT); Computer Vision and Pattern Recognition (cs.CV); Discrete Mathematics (cs.DM)
Abstract
A special type of cyclic sequence named the adjacency-hopping de Bruijn sequence is introduced in this paper. The existence of such sequences is proved theoretically, and their number is derived. These sequences guarantee that all neighboring codes are different while retaining the uniqueness of subsequences, a significant characteristic of the original de Bruijn sequences in coding and matching. Finally, the adjacency-hopping de Bruijn sequences are applied to structured light coding, and a color fringe pattern coded by such a sequence is presented. In summary, the proposed sequences demonstrate significant advantages in structured light coding by virtue of the uniqueness of subsequences and the adjacency-hopping characteristic, and show potential for extension to other fields with similar requirements of non-repetitive coding and efficient matching.
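The two defining properties, distinct windows and no equal adjacent symbols, can be verified by brute force for a small instance. The target cyclic length k*(k-1)^(n-1) below simply counts the length-n windows with no repeated adjacent symbols; it is a counting observation for this illustration, not the paper's derivation of the number of sequences:

```python
from itertools import product

def is_adjacency_hopping_db(seq, k, n):
    """Cyclic sequence check: adjacent symbols differ (with wrap-around)
    and all length-n cyclic windows are distinct."""
    L = len(seq)
    if any(seq[i] == seq[(i + 1) % L] for i in range(L)):
        return False
    windows = {tuple(seq[(i + j) % L] for j in range(n)) for i in range(L)}
    return len(windows) == L

# For k = 3 symbols and order n = 2, there are k*(k-1)^(n-1) = 6 windows
# without adjacent repeats, so the target cyclic length is 6.
k, n, L = 3, 2, 6
found = next(
    seq for seq in product(range(k), repeat=L) if is_adjacency_hopping_db(seq, k, n)
)
```

For example, the cyclic sequence 0,1,0,2,1,2 satisfies both properties: its six length-2 windows are all distinct and no two neighbors (including the wrap-around pair) coincide.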
Bandwidth-efficient Inference for Neural Image Compression
Authors: Shanzhi Yin, Tongda Xu, Yongsheng Liang, Yuanyuan Wang, Yanghao Li, Yan Wang, Jingjing Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Abstract
With neural networks growing deeper and feature maps growing larger, the limited communication bandwidth to external memory (DRAM) and power constraints become a bottleneck for network inference on mobile and edge devices. In this paper, we propose an end-to-end differentiable, bandwidth-efficient neural inference method in which the activations are compressed by a neural data compression method. Specifically, we propose a transform-quantization-entropy coding pipeline for activation compression, with symmetric exponential Golomb coding and a data-dependent Gaussian entropy model for arithmetic coding. Optimized jointly with existing model quantization methods, the low-level task of image compression achieves up to a 19x bandwidth reduction with a 6.21x energy saving.
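For reference, standard order-0 exponential Golomb coding, the building block behind the symmetric variant used in the pipeline, fits in a few lines. The zigzag signed-to-unsigned map shown here is one common way to make the code symmetric around zero and is an assumption for this sketch, not necessarily the paper's exact variant:

```python
def zigzag(x):
    # Map a signed integer to a non-negative one: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    return 2 * x if x >= 0 else -2 * x - 1

def exp_golomb_encode(x):
    # Order-0 exp-Golomb: write x+1 in binary, prefixed by (length - 1) zeros.
    b = bin(x + 1)[2:]
    return "0" * (len(b) - 1) + b

def exp_golomb_decode(bits):
    # Count the leading zeros, then read that many + 1 bits as x+1.
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    return int(bits[zeros: 2 * zeros + 1], 2) - 1

# Round-trip check over small values.
for v in range(20):
    assert exp_golomb_decode(exp_golomb_encode(v)) == v
```

Small magnitudes get short codewords, which is why such codes pair well with transform coefficients concentrated near zero.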
New methods for quasi-interpolation approximations: resolution of odd-degree singularities
Authors: Martin Buhmann, Janin Jäger, Joaquín Jódar, Miguel L. Rodríguez
Abstract
In this paper, we study functional approximations where we choose the so-called radial basis function method and more specifically, quasi-interpolation. From the various available approaches to the latter, we form new quasi-Lagrange functions when the orders of the singularities of the radial function's Fourier transforms at zero do not match the parity of the dimension of the space, and therefore new expansions and coefficients are needed to overcome this problem. We develop explicit constructions of infinite Fourier expansions that provide these coefficients and make an extensive comparison of the approximation qualities and, with a particular focus, the polynomial precision and uniform approximation order of the various formulae. One of the interesting observations concerns the link between algebraic conditions of expansion coefficients and analytic properties of localness and convergence.
Non-Clashing Teaching Maps for Balls in Graphs
Authors: Jérémie Chalopin, Victor Chepoi, Fionn Mc Inerney, Sébastien Ratel
Subjects: Computational Complexity (cs.CC); Discrete Mathematics (cs.DM); Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Combinatorics (math.CO)
Abstract
Recently, Kirkpatrick et al. [ALT 2019] and Fallat et al. [JMLR 2023] introduced non-clashing teaching and showed it to be the most efficient machine teaching model satisfying the benchmark for collusion-avoidance set by Goldman and Mathias. A teaching map $T$ for a concept class $\cal{C}$ assigns a (teaching) set $T(C)$ of examples to each concept $C \in \cal{C}$. A teaching map is non-clashing if no pair of concepts are consistent with the union of their teaching sets. The size of a non-clashing teaching map (NCTM) $T$ is the maximum size of a $T(C)$, $C \in \cal{C}$. The non-clashing teaching dimension NCTD$(\cal{C})$ of $\cal{C}$ is the minimum size of an NCTM for $\cal{C}$. NCTM$^+$ and NCTD$^+(\cal{C})$ are defined analogously, except the teacher may only use positive examples. We study NCTMs and NCTM$^+$s for the concept class $\mathcal{B}(G)$ consisting of all balls of a graph $G$. We show that the associated decision problem {\sc B-NCTD$^+$} for NCTD$^+$ is NP-complete in split, co-bipartite, and bipartite graphs. Surprisingly, we even prove that, unless the ETH fails, {\sc B-NCTD$^+$} does not admit an algorithm running in time $2^{2^{o(vc)}}\cdot n^{O(1)}$, nor a kernelization algorithm outputting a kernel with $2^{o(vc)}$ vertices, where vc is the vertex cover number of $G$. These are extremely rare results: it is only the second (fourth, resp.) problem in NP to admit a double-exponential lower bound parameterized by vc (treewidth, resp.), and only one of very few problems to admit an ETH-based conditional lower bound on the number of vertices in a kernel. We complement these lower bounds with matching upper bounds. For trees, interval graphs, cycles, and trees of cycles, we derive NCTM$^+$s or NCTMs for $\mathcal{B}(G)$ of size proportional to its VC-dimension. For Gromov-hyperbolic graphs, we design an approximate NCTM$^+$ for $\mathcal{B}(G)$ of size 2.
Abstract
Large Language Models (LLMs) have demonstrated remarkable adaptability, showcasing their capacity to excel in tasks for which they were not explicitly trained. However, despite their impressive natural language processing (NLP) capabilities, effective alignment of LLMs remains a crucial challenge when deploying them for specific clinical applications. The ability to generate responses with factually accurate content and to engage in non-trivial reasoning steps is crucial for LLMs to be eligible for applications in clinical medicine. Employing a combination of techniques including instruction-tuning and in-prompt strategies like few-shot and chain-of-thought prompting has significantly enhanced the performance of LLMs. Our proposed alignment strategy for medical question-answering, known as 'expand-guess-refine', offers a parameter- and data-efficient solution. A preliminary analysis of this method demonstrated outstanding performance, achieving a score of 70.63% on a subset of questions sourced from the USMLE dataset.
A Unified Framework for Discovering Discrete Symmetries
Abstract
We consider the problem of learning a function respecting a symmetry from among a class of symmetries. We develop a unified framework that enables symmetry discovery across a broad range of subgroups including locally symmetric, dihedral and cyclic subgroups. At the core of the framework is a novel architecture composed of linear and tensor-valued functions that expresses functions invariant to these subgroups in a principled manner. The structure of the architecture enables us to leverage multi-armed bandit algorithms and gradient descent to efficiently optimize over the linear and the tensor-valued functions, respectively, and to infer the symmetry that is ultimately learnt. We also discuss the necessity of the tensor-valued functions in the architecture. Experiments on image-digit sum and polynomial regression tasks demonstrate the effectiveness of our approach.
Towards Efficient Training with Negative Samples in Visual Tracking
Authors: Qingmao Wei, Bi Zeng, Guotian Zeng
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Current state-of-the-art (SOTA) methods in visual object tracking often require extensive computational resources and vast amounts of training data, leading to a risk of overfitting. This study introduces a more efficient training strategy to mitigate overfitting and reduce computational requirements. We balance the training process with a mix of negative and positive samples from the outset, named Joint learning with Negative samples (JN). Negative samples refer to scenarios where the object from the template is not present in the search region, which helps prevent the model from simply memorizing the target and instead encourages it to use the template for object localization. To handle negative samples effectively, we adopt a distribution-based head, which models the bounding box as a distribution of distances to express uncertainty about the target's location in the presence of negative samples, offering an efficient way to manage the mixed-sample training. Furthermore, our approach introduces a target-indicating token that encapsulates the target's precise location within the template image. This method provides exact boundary details at negligible computational cost while improving performance. Our model, JN-256, exhibits superior performance on challenging benchmarks, achieving 75.8% AO on GOT-10k and 84.1% AUC on TrackingNet. Notably, JN-256 outperforms previous SOTA trackers that utilize larger models and higher input resolutions, even though it is trained with only half the number of samples used in those works.
DECODE: Data-driven Energy Consumption Prediction leveraging Historical Data and Environmental Factors in Buildings
Authors: Aditya Mishra, Haroon R. Lone, Aayush Mishra
Abstract
Energy prediction in buildings plays a crucial role in effective energy management. Precise predictions are essential for achieving optimal energy consumption and distribution within the grid. This paper introduces a Long Short-Term Memory (LSTM) model designed to forecast building energy consumption using historical energy data, occupancy patterns, and weather conditions. The LSTM model provides accurate short, medium, and long-term energy predictions for residential and commercial buildings compared to existing prediction models. We compare our LSTM model with established prediction methods, including linear regression, decision trees, and random forest. Encouragingly, the proposed LSTM model emerges as the superior performer across all metrics. It demonstrates exceptional prediction accuracy, boasting the highest R2 score of 0.97 and the most favorable mean absolute error (MAE) of 0.007. An additional advantage of our developed model is its capacity to achieve efficient energy consumption forecasts even when trained on a limited dataset. We address concerns about overfitting (variance) and underfitting (bias) through rigorous training and evaluation on real-world data. In summary, our research contributes to energy prediction by offering a robust LSTM model that outperforms alternative methods and operates with remarkable efficiency, generalizability, and reliability.
Evaluation of NR-Sidelink for Cooperative Industrial AGVs
Authors: Shubhangi Bhadauria, Klea Plaku, Yash Deshpande, Wolfgang Kellerer
Subjects: Networking and Internet Architecture (cs.NI); Emerging Technologies (cs.ET)
Abstract
Industry 4.0 has brought to attention the need for a connected, flexible, and autonomous production environment. The New Radio (NR)-sidelink, which was introduced by the third-generation partnership project (3GPP) in Release 16, can be particularly helpful for factories that need to facilitate cooperative and close-range communication. Automated Guided Vehicles (AGVs) are important for material handling and carriage within these environments, and using NR-sidelink communication can further enhance their performance. An efficient resource allocation mechanism is required to ensure reliable communication and avoid interference between AGVs and other wireless systems in the factory using NR-sidelink. This work evaluates the 3GPP standardized resource allocation algorithm for NR-sidelink for a use case of cooperative carrying AGVs. We suggest further improvements that are tailored to the quality of service (QoS) requirements of an indoor factory communication scenario with cooperative AGVs. The use of NR-sidelink communication has the potential to help meet the QoS requirements for different Industry 4.0 use cases. This work can be a foundation for further improvements in NR-sidelink in 3GPP Release 18 and beyond.
M3D-NCA: Robust 3D Segmentation with Built-in Quality Control
Authors: John Kalkhof, Anirban Mukhopadhyay
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Medical image segmentation relies heavily on large-scale deep learning models, such as UNet-based architectures. However, the real-world utility of such models is limited by their high computational requirements, which makes them impractical for resource-constrained environments such as primary care facilities and conflict zones. Furthermore, shifts in the imaging domain can render these models ineffective and even compromise patient safety if such errors go undetected. To address these challenges, we propose M3D-NCA, a novel methodology that leverages Neural Cellular Automata (NCA) segmentation for 3D medical images using n-level patchification. Moreover, we exploit the variance in M3D-NCA to develop a novel quality metric which can automatically detect errors in the segmentation process of NCAs. M3D-NCA outperforms UNet models that are two orders of magnitude larger by 2% Dice in hippocampus and prostate segmentation, and can be run on a Raspberry Pi 4 Model B (2GB RAM). This highlights the potential of M3D-NCA as an effective and efficient alternative for medical image segmentation in resource-constrained environments.
Hierarchical-level rain image generative model based on GAN
Abstract
Autonomous vehicles are exposed to various weather during operation, which is likely to trigger the performance limitations of the perception system, leading to safety of the intended functionality (SOTIF) problems. To efficiently generate data for testing the performance of visual perception algorithms under various weather conditions, a hierarchical-level rain image generative model, rain conditional CycleGAN (RCCycleGAN), is constructed. RCCycleGAN is based on the generative adversarial network (GAN) and can generate images of light, medium, and heavy rain. Different rain intensities are introduced as labels in the conditional GAN (CGAN). Meanwhile, the model structure is optimized and the training strategy is adjusted to alleviate the problem of mode collapse. In addition, natural rain images of different intensities are collected and processed for model training and validation. Compared with the two baseline models, CycleGAN and DerainCycleGAN, the peak signal-to-noise ratio (PSNR) of RCCycleGAN on the test dataset is improved by 2.58 dB and 0.74 dB, and the structural similarity (SSIM) is improved by 18% and 8%, respectively. Ablation experiments are also carried out to validate the effectiveness of the model tuning.
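For reference, PSNR, the metric behind the reported 2.58 dB and 0.74 dB gains, is just a log-scaled mean squared error; a minimal sketch with synthetic images (the data here is illustrative, not the paper's test set):

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64, 3))
noisy = np.clip(ref + rng.normal(scale=8.0, size=ref.shape), 0, 255)
noisier = np.clip(ref + rng.normal(scale=16.0, size=ref.shape), 0, 255)
# Less distortion gives a higher PSNR, which is the sense of the reported gains.
```

A 2.58 dB improvement thus corresponds to a substantial reduction in mean squared error against the reference images.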
Abstract
Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce "Live Charts," a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved the model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.
FishMOT: A Simple and Effective Method for Fish Tracking Based on IoU Matching
Abstract
The tracking of various fish species plays a profoundly significant role in understanding the behavior of individual fish and their groups. Present tracking methods suffer from issues of low accuracy or poor robustness. To address these concerns, this paper proposes a novel tracking approach, named FishMOT (Fish Multiple Object Tracking). This method combines object detection techniques with the IoU matching algorithm, thereby achieving efficient, precise, and robust fish detection and tracking. Diverging from other approaches, this method eliminates the need for multiple feature extractions and identity assignments for each individual, instead directly utilizing the output of the detector for tracking, thereby significantly reducing computational time and storage space. Furthermore, this method imposes minimal requirements on factors such as video quality and variations in individual appearance. As long as the detector can accurately locate and identify fish, effective tracking can be achieved. This approach enhances robustness and generalizability. Moreover, the algorithm employed in this method addresses the issue of missed detections without relying on complex feature matching or graph optimization algorithms, which contributes to improved accuracy and reliability. Experimental trials were conducted on the open-source video dataset provided by idtracker.ai, and comparisons were made with state-of-the-art detector-based multi-object tracking methods, as well as with idtracker.ai and TRex, two tools that demonstrate exceptional performance in the field of animal tracking. The experimental results demonstrate that the proposed method outperforms other approaches in various evaluation metrics, exhibiting faster speed and lower memory requirements. The source codes and pre-trained models are available at: https://github.com/gakkistar/FishMOT
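The IoU association step described above can be sketched as a greedy matcher between the previous frame's boxes and the current detections; this is a simplified stand-in for illustration, not FishMOT's exact implementation:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match(tracks, detections, thresh=0.3):
    # Greedy IoU association: each track takes its best unused detection.
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, thresh
        for di, d in enumerate(detections):
            if di not in used and iou(t, d) > best_iou:
                best, best_iou = di, iou(t, d)
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 19, 31, 29), (1, 1, 11, 11)]
matches = match(tracks, dets)
```

Because the association uses only box overlap, no appearance features or identity embeddings are needed, which is the source of the speed and memory advantages the abstract claims.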
Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models
Authors: Pierre Schumacher, Thomas Geijtenbeek, Vittorio Caggiano, Vikash Kumar, Syn Schmitt, Georg Martius, Daniel F. B. Haeufle
Abstract
Humans excel at robust bipedal walking in complex natural environments. In each step, they adequately tune the interaction of biomechanical muscle dynamics and neuronal signals to be robust against uncertainties in ground conditions. However, it is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem considering stability, robustness, and energy efficiency. In computer simulations, energy minimization has been shown to be a successful optimization target, reproducing natural walking with trajectory optimization or reflex-based control methods. However, these methods focus on particular motions at a time and the resulting controllers are limited when compensating for perturbations. In robotics, reinforcement learning~(RL) methods recently achieved highly stable (and efficient) locomotion on quadruped systems, but the generation of human-like walking with bipedal biomechanical models has required extensive use of expert data sets. This strong reliance on demonstrations often results in brittle policies and limits the application to new behaviors, especially considering the potential variety of movements for high-dimensional musculoskeletal models in 3D. Achieving natural locomotion with RL without sacrificing its incredible robustness might pave the way for a novel approach to studying human walking in complex natural environments.
Sparse 3D Reconstruction via Object-Centric Ray Sampling
Authors: Llukman Cerkezi, Paolo Favaro
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
We propose a novel method for 3D object reconstruction from a sparse set of views captured with a 360-degree calibrated camera rig. We represent the object surface through a hybrid model that uses both an MLP-based neural representation and a triangle mesh. A key contribution of our work is a novel object-centric sampling scheme for the neural representation, where rays are shared among all views. This efficiently concentrates and reduces the number of samples used to update the neural model at each iteration. The sampling scheme also relies on the mesh representation to ensure that samples are well-distributed along its normals. Rendering is then performed efficiently by a differentiable renderer. We demonstrate that this sampling scheme results in more effective training of the neural representation, does not require the additional supervision of segmentation masks, yields state-of-the-art 3D reconstructions, and works with sparse views on the Google Scanned Objects, Tanks and Temples, and MVMC Car datasets.
A Micro-Macro Parallel-in-Time Implementation for the 2D Navier-Stokes Equations
Authors: Benedict Philippi, Mahfuz Sarker Miraz, Thomas Slawig
Abstract
In this paper the Micro-Macro Parareal algorithm is adapted to PDEs. This parallel-in-time approach requires two meshes of different spatial resolution in order to compute, in an iterative way, approximations to a predefined reference solution. When fast convergence can be accomplished in a few iterations, the algorithm is able to reduce wall time compared to the serial computation. We chose the laminar flow around a cylinder benchmark on a 2-dimensional domain, simulated with the open-source software OpenFOAM. The numerical experiments presented in this work aim to approximate states that are local in time and space, as well as the diagnostic lift coefficient. The Reynolds number is gradually increased from 100 to 1,000, before the transition to turbulent flow sets in. After the results are presented, the convergence behavior is discussed with respect to the Reynolds number and the applied interpolation schemes.
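The Parareal correction this abstract relies on can be illustrated with a scalar stand-in problem. The sketch below is the textbook update U_{k+1}^{n+1} = G(U_{k+1}^n) + F(U_k^n) - G(U_k^n) applied to u' = -u, not the paper's OpenFOAM/Navier-Stokes setup; the propagator choices and slice counts are illustrative:

```python
import numpy as np

def parareal(u0, t0, t1, n_slices, fine, coarse, n_iter):
    """Plain Parareal iteration: a cheap coarse sweep is corrected by
    fine solves that could run in parallel across the time slices."""
    ts = np.linspace(t0, t1, n_slices + 1)
    # Initial serial coarse sweep.
    U = [u0]
    for n in range(n_slices):
        U.append(coarse(U[-1], ts[n], ts[n + 1]))
    for _ in range(n_iter):
        F = [fine(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]    # parallelizable
        G_old = [coarse(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        U_new = [u0]
        for n in range(n_slices):
            U_new.append(coarse(U_new[-1], ts[n], ts[n + 1]) + F[n] - G_old[n])
        U = U_new
    return U

# Stand-in propagators for u' = -u: exact solve (fine) vs one Euler step (coarse).
fine = lambda u, a, b: u * np.exp(-(b - a))
coarse = lambda u, a, b: u * (1.0 - (b - a))
```

A standard property of the iteration is that after k corrections the first k slices are exact with respect to the fine propagator, which is what makes wall-time reduction possible when convergence happens in few iterations.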
Adaptive Growth: Real-time CNN Layer Expansion
Authors: Yunjie Zhu, Yunhao Chen
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Deep Neural Networks (DNNs) have shown unparalleled achievements in numerous applications, reflecting their proficiency in managing vast data sets. Yet, their static structure limits their adaptability in ever-changing environments. This research presents a new algorithm that allows the convolutional layer of a Convolutional Neural Network (CNN) to dynamically evolve based on data input, while still being seamlessly integrated into existing DNNs. Instead of a rigid architecture, our approach iteratively introduces kernels to the convolutional layer, gauging its real-time response to varying data. This process is refined by evaluating the layer's capacity to discern image features, guiding its growth. Remarkably, our unsupervised method has outstripped its supervised counterparts across diverse datasets like MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. It also showcases enhanced adaptability in transfer learning scenarios. By introducing a data-driven model scalability strategy, we are filling a void in deep learning, leading to more flexible and efficient DNNs suited for dynamic settings. Code:(https://github.com/YunjieZhu/Extensible-Convolutional-Layer-git-version).
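The growth loop described above, iteratively adding kernels while a data-driven score of the layer's feature-discrimination capacity keeps improving, can be sketched generically. The score function, stopping tolerance, and kernel initializer below are placeholders, not the paper's actual criterion:

```python
import numpy as np

def grow_conv_layer(weights, score_fn, make_kernel, max_kernels=64, tol=1e-3):
    """Iteratively append kernels (out-channels) to a conv layer's weight
    tensor of shape (out_ch, in_ch, kH, kW) while a data-driven score improves."""
    score = score_fn(weights)
    while weights.shape[0] < max_kernels:
        candidate = np.concatenate(
            [weights, make_kernel(weights.shape[1:])[None]], axis=0)
        new_score = score_fn(candidate)
        if new_score - score < tol:   # growth no longer helps; stop expanding
            break
        weights, score = candidate, new_score
    return weights
```

In the paper's setting the score would be evaluated on the layer's real-time response to incoming data; here any callable taking the weight tensor works.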
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra
Authors: Andres Potapczynski, Marc Finzi, Geoff Pleiss, Andrew Gordon Wilson
Abstract
Many areas of machine learning and science involve large linear algebra problems, such as eigendecompositions, solving linear systems, computing matrix exponentials, and trace estimation. The matrices involved often have Kronecker, convolutional, block diagonal, sum, or product structure. In this paper, we propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA (Compositional Linear Algebra). By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms. Moreover, CoLA provides memory efficient automatic differentiation, low precision computation, and GPU acceleration in both JAX and PyTorch, while also accommodating new objects, operations, and rules in downstream packages via multiple dispatch. CoLA can accelerate many algebraic operations, while making it easy to prototype matrix structures and algorithms, providing an appealing drop-in tool for virtually any computational effort that requires linear algebra. We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
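The dispatch idea, composite operators that know how to apply themselves efficiently from the structure of their parts, can be sketched with a lazy Kronecker product. This is a toy NumPy illustration of the concept, not CoLA's actual API; the class names are assumptions:

```python
import numpy as np

class Dense:
    """Wrap an explicit matrix behind the same matvec interface."""
    def __init__(self, A): self.A = np.asarray(A)
    @property
    def shape(self): return self.A.shape
    def matvec(self, x): return self.A @ x

class Kronecker:
    """Lazy A kron B: apply via (A kron B) vec(X) = vec(A X B^T),
    never materializing the (m*p) x (n*q) product matrix."""
    def __init__(self, A, B): self.A, self.B = A, B
    @property
    def shape(self):
        (m, n), (p, q) = self.A.shape, self.B.shape
        return (m * p, n * q)
    def matvec(self, x):
        (m, n), (p, q) = self.A.shape, self.B.shape
        X = x.reshape(n, q)            # row-major unvec
        AX = self.A.matvec(X)          # m x q
        Y = self.B.matvec(AX.T).T      # m x p
        return Y.reshape(m * p)
```

A matvec through the composite costs only dense products on the small factors, which is the kind of saving compositional dispatch rules exploit for Kronecker, block-diagonal, sum, and product structures alike.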
Prompt-based All-in-One Image Restoration using CNNs and Transformer
Authors: Hu Gao, Jing Yang, Ning Wang, Jingfan Yang, Ying Zhang, Depeng Dang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Image restoration aims to recover high-quality images from their degraded observations. Since most existing methods are dedicated to removing a single type of degradation, they may not yield optimal results on other degradation types, which limits their applicability in real-world scenarios. In this paper, we propose a novel data ingredient-oriented approach that leverages prompt-based learning to enable a single model to efficiently tackle multiple image degradation tasks. Specifically, we utilize an encoder to capture features and introduce prompts with degradation-specific information to guide the decoder in adaptively recovering images affected by various degradations. To model the local invariant properties and non-local information needed for high-quality image restoration, we combine CNN operations and Transformers. Simultaneously, we make several key designs in the Transformer blocks (multi-head rearranged attention with prompts and a simple-gate feed-forward network) to reduce computational requirements and selectively determine what information should be preserved to facilitate efficient recovery of potentially sharp images. Furthermore, we incorporate a feature fusion mechanism that further explores the multi-scale information to improve the aggregated features. The resulting tightly interlinked hierarchical architecture, named CAPTNet, handles different types of degradations, and extensive experiments demonstrate that our method performs competitively with task-specific algorithms.
Establishing Markov Equivalence in Cyclic Directed Graphs
Abstract
We present a new, efficient procedure to establish Markov equivalence between directed graphs that may or may not contain cycles under the \textit{d}-separation criterion. It is based on the Cyclic Equivalence Theorem (CET) in the seminal works on cyclic models by Thomas Richardson in the mid '90s, but now rephrased from an ancestral perspective. The resulting characterization leads to a procedure for establishing Markov equivalence between graphs that no longer requires tests for d-separation, leading to a significantly reduced algorithmic complexity. The conceptually simplified characterization may help to reinvigorate theoretical research towards sound and complete cyclic discovery in the presence of latent confounders. This version includes a correction to rule (iv) in Theorem 1, and the subsequent adjustment in part 2 of Algorithm 2.
Solving multiscale elliptic problems by sparse radial basis function neural networks
Abstract
Machine learning has been successfully applied to various fields of scientific computing in recent years. In this work, we propose a sparse radial basis function neural network method to solve elliptic partial differential equations (PDEs) with multiscale coefficients. Inspired by the deep mixed residual method, we rewrite the second-order problem into a first-order system and employ multiple radial basis function neural networks (RBFNNs) to approximate unknown functions in the system. To avoid overfitting due to the simplicity of RBFNNs, an additional regularization is introduced in the loss function. Thus the loss function contains two parts: the $L_2$ loss for the residual of the first-order system and boundary conditions, and the $\ell_1$ regularization term for the weights of radial basis functions (RBFs). An algorithm for optimizing the specific loss function is introduced to accelerate the training process. The accuracy and effectiveness of the proposed method are demonstrated through a collection of multiscale problems with scale separation, discontinuity and multiple scales from one to three dimensions. Notably, the $\ell_1$ regularization can achieve the goal of representing the solution by fewer RBFs. As a consequence, the total number of RBFs scales like $\mathcal{O}(\varepsilon^{-n\tau})$, where $\varepsilon$ is the smallest scale, $n$ is the dimensionality, and $\tau$ is typically smaller than $1$. It is worth mentioning that the proposed method not only exhibits numerical convergence, and thus provides a reliable numerical solution in three dimensions when a classical method is typically not affordable, but also outperforms most other available machine learning methods in terms of accuracy and robustness.
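The two-part loss described above, a least-squares residual plus an $\ell_1$ penalty that prunes unneeded basis functions, can be sketched for a 1D fitting problem. This uses plain ISTA on a regression target rather than the paper's first-order PDE residual, and all sizes, defaults, and the optimizer choice are illustrative:

```python
import numpy as np

def rbf_features(x, centers, gamma=100.0):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-gamma * (x_i - c_j)^2)."""
    return np.exp(-gamma * (x[:, None] - centers[None, :]) ** 2)

def fit_sparse_rbf(x, y, centers, lam=1e-4, n_iter=5000):
    """ISTA (proximal gradient) for the l1-regularized least-squares problem
        min_w  ||Phi w - y||^2 / (2n) + lam * ||w||_1,
    whose soft-thresholding step drives the weights of unneeded RBFs to zero."""
    Phi = rbf_features(x, centers)
    n = len(x)
    lr = n / np.linalg.norm(Phi.T @ Phi, 2)       # step = 1 / Lipschitz constant
    w = np.zeros(len(centers))
    for _ in range(n_iter):
        w = w - lr * (Phi.T @ (Phi @ w - y)) / n                 # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # prox of l1
    return w
```

The soft-threshold in the last line is what realizes "fewer RBFs": weights whose gradient pull is below the threshold collapse to exactly zero.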
Serving Time: Real-Time, Safe Motion Planning and Control for Manipulation of Unsecured Objects
Authors: Zachary Brei, Jonathan Michaux, Bohao Zhang, Patrick Holmes, Ram Vasudevan
Subjects: Robotics (cs.RO); Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
A key challenge to ensuring the rapid transition of robotic systems from the industrial sector to more ubiquitous applications is the development of algorithms that can guarantee safe operation while in close proximity to humans. Motion planning and control methods, for instance, must be able to certify safety while operating in real-time in arbitrary environments and in the presence of model uncertainty. This paper proposes Wrench Analysis for Inertial Transport using Reachability (WAITR), a certifiably safe motion planning and control framework for serial link manipulators that manipulate unsecured objects in arbitrary environments. WAITR uses reachability analysis to construct over-approximations of the contact wrench applied to unsecured objects, which captures uncertainty in the manipulator dynamics, the object dynamics, and contact parameters such as the coefficient of friction. An optimization problem formulation is presented that can be solved in real-time to generate provably-safe motions for manipulating the unsecured objects. This paper illustrates that WAITR outperforms state-of-the-art methods in a variety of simulation experiments and demonstrates its performance in the real world.
Optimal transmission switching and grid reconfiguration for transmission systems via convex relaxations
Authors: Vineet Jagadeesan Nair
Subjects: Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
In this paper, we formulate optimization problems to perform optimal transmission switching (OTS) in order to operate power transmission grids most efficiently. In any given electrical network, several of the transmission lines are generally equipped with switches, circuit breakers, and/or reclosers. The conventional practice is to operate the grid using a static or fixed configuration. However, it may be beneficial to dynamically reconfigure the grid through switching actions in order to respond to real-time demand and supply conditions. This has the potential to help reduce costs and improve efficiency. Furthermore, such OTS may be more crucial in future power grids with much higher penetrations of renewable energy sources, which introduce more variability and intermittency in generation. Similarly, OTS can potentially help mitigate the effects of unpredictable demand fluctuations (e.g. due to extreme weather). We explore and compare several different formulations of the OTS problem in terms of computational performance and optimality, and apply them to small transmission test case networks as a proof of concept to assess the effects of applying OTS.
Learning to Recharge: UAV Coverage Path Planning through Deep Reinforcement Learning
Authors: Mirco Theile, Harald Bayerlein, Marco Caccamo, Alberto L. Sangiovanni-Vincentelli
Abstract
Coverage path planning (CPP) is a critical problem in robotics, where the goal is to find an efficient path that covers every point in an area of interest. This work addresses the power-constrained CPP problem with recharge for battery-limited unmanned aerial vehicles (UAVs). In this problem, a notable challenge emerges from integrating recharge journeys into the overall coverage strategy, highlighting the intricate task of making strategic, long-term decisions. We propose a novel proximal policy optimization (PPO)-based deep reinforcement learning (DRL) approach with map-based observations, utilizing action masking and discount factor scheduling to optimize coverage trajectories over the entire mission horizon. We further provide the agent with a position history to handle emergent state loops caused by the recharge capability. Our approach outperforms a baseline heuristic and generalizes to different target zones and maps, with limited generalization to unseen maps. We offer valuable insights into DRL algorithm design for long-horizon problems and provide a publicly available software framework for the CPP problem.
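Two of the ingredients named above, action masking and discount factor scheduling, are easy to illustrate in isolation. Both snippets are generic sketches (the linear schedule and its endpoints are assumptions, not the paper's exact choices):

```python
import numpy as np

def masked_policy(logits, mask):
    """Action masking: invalid actions receive probability exactly zero by
    setting their logits to -inf before the softmax."""
    masked = np.where(mask, logits, -np.inf)
    z = np.exp(masked - masked.max())   # shift for numerical stability
    return z / z.sum()

def discount_schedule(step, total, g0=0.9, g1=0.999):
    """Linearly anneal the discount factor toward its final value, so the
    agent gradually accounts for longer horizons as training progresses."""
    return g0 + (g1 - g0) * min(step / total, 1.0)
```

Masking keeps the policy gradient from wasting probability mass on infeasible actions (e.g. flying off the map), while annealing the discount eases optimization of the long mission horizon.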
UMS: Live Migration of Containerized Services across Autonomous Computing Systems
Abstract
Containerized services deployed within various computing systems, such as edge and cloud, desire live migration support to enable user mobility, elasticity, and load balancing. To enable such a ubiquitous and efficient service migration, a live migration solution needs to handle circumstances where users have various authority levels (full control, limited control, or no control) over the underlying computing systems. Supporting the live migration at these levels serves as the cornerstone of interoperability, and can unlock several use cases across various forms of distributed systems. As such, in this study, we develop a ubiquitous migration solution (called UMS) that, for a given containerized service, can automatically identify the feasible migration approach, and then seamlessly perform the migration across autonomous computing systems. UMS does not interfere with the way the orchestrator handles containers and can coordinate the migration without the orchestrator's involvement. Moreover, UMS is orchestrator-agnostic, i.e., it can be plugged into any underlying orchestrator platform. UMS is equipped with novel methods that can coordinate and perform the live migration at the orchestrator, container, and service levels. Experimental results show that for single-process containers, the service-level approach, and for multi-process containers with a small (< 128 MiB) memory footprint, the container-level migration approach lead to the lowest migration overhead and service downtime. To demonstrate the potential of UMS in realizing interoperability and multi-cloud scenarios, we used it to perform live service migration across heterogeneous orchestrators, and between Microsoft Azure and Google Cloud.
Error analysis for local coarsening in univariate spline spaces
Authors: Silvano Figueroa, Eduardo M. Garau, Pedro Morin
Abstract
In this article we analyze the error produced by the removal of an arbitrary knot from a spline function. When a knot has multiplicity greater than one, this implies a reduction of its multiplicity by one unit. In particular, we deduce a very simple formula to compute the error in terms of some neighboring knots and a few control points of the considered spline. Furthermore, we show precisely how this error is related to the jump of a derivative of the spline at the knot. We then use the developed theory to propose efficient and very low-cost local error indicators and adaptive coarsening algorithms. Finally, we present some numerical experiments to illustrate their performance and show some applications.
Keyword: faster
Experience and Prediction: A Metric of Hardness for a Novel Litmus Test
Abstract
In the last decade, the Winograd Schema Challenge (WSC) has attracted considerable attention in the research community as a novel litmus test. Consequently, the WSC has spurred research interest because it can be seen as a means to understand human behavior. In this regard, the development of new techniques has made it possible to use Winograd schemas in various fields, such as the design of novel forms of CAPTCHAs. Work from the literature that established a baseline for human adult performance on the WSC has shown that not all schemas are the same, meaning that they could potentially be categorized according to their perceived hardness for humans. In this regard, this \textit{hardness-metric} could be used in future challenges or in the WSC CAPTCHA service to differentiate between Winograd schemas. Recent work of ours has shown that this could be achieved via the design of an automated system that is able to output the hardness indexes of Winograd schemas, albeit with limitations regarding the number of schemas it could be applied to. This paper adds to previous research by presenting a new system that is based on Machine Learning (ML) and able to output the hardness of any Winograd schema faster and more accurately than any previously used method. Our developed system, which works with two different approaches, namely random forests and deep learning (LSTM-based), is ready to be used as an extension of any other system that aims to differentiate between Winograd schemas according to their perceived hardness for humans. At the same time, along with our developed system, we extend previous work by presenting the results of a large-scale experiment that shows how human performance varies across Winograd schemas.
DAMM: Directionality-Aware Mixture Model Parallel Sampling for Efficient Dynamical System Learning
Abstract
The Linear Parameter Varying Dynamical System (LPV-DS) is a promising framework for learning stable time-invariant motion policies in robot control. By employing statistical modeling and semi-definite optimization, LPV-DS encodes complex motions via non-linear DS, ensuring the robustness and stability of the system. However, the current LPV-DS scheme faces challenges in accurately interpreting trajectory data while maintaining model efficiency and computational efficiency. To address these limitations, we propose the Directionality-aware Mixture Model (DAMM), a new statistical model that leverages Riemannian metric on $d$-dimensional sphere $\mathbb{S}^d$, and efficiently incorporates non-Euclidean directional information with position. Additionally, we introduce a hybrid Markov chain Monte Carlo method that combines the Gibbs Sampling and the Split/Merge Proposal, facilitating parallel computation and enabling faster inference for near real-time learning performance. Through extensive empirical validation, we demonstrate that the improved LPV-DS framework with DAMM is capable of producing physically-meaningful representations of the trajectory data and improved performance of the generated DS while showcasing significantly enhanced learning speed compared to its previous iterations.
Improved Outlier Robust Seeding for k-means
Authors: Amit Deshpande, Rameshwar Pratap
Subjects: Machine Learning (cs.LG); Computational Geometry (cs.CG); Data Structures and Algorithms (cs.DS)
Abstract
The $k$-means is a popular clustering objective, although it is inherently non-robust and sensitive to outliers. Its popular seeding or initialization, called $k$-means++, uses $D^{2}$ sampling and comes with a provable $O(\log k)$ approximation guarantee \cite{AV2007}. However, in the presence of adversarial noise or outliers, $D^{2}$ sampling is more likely to pick centers from distant outliers instead of inlier clusters, and therefore its approximation guarantees \textit{w.r.t.} the $k$-means solution on inliers do not hold. Assuming that the outliers constitute a constant fraction of the given data, we propose a simple variant of the $D^2$ sampling distribution, which makes it robust to the outliers. Our algorithm runs in $O(ndk)$ time, outputs $O(k)$ clusters, discards marginally more points than the optimal number of outliers, and comes with a provable $O(1)$ approximation guarantee. Our algorithm can also be modified to output exactly $k$ clusters instead of $O(k)$ clusters, while keeping its running time linear in $n$ and $d$. This is an improvement over previous results for robust $k$-means based on LP relaxation and rounding \cite{Charikar}, \cite{KrishnaswamyLS18} and \textit{robust $k$-means++} \cite{DeshpandeKP20}. Our empirical results show the advantage of our algorithm over $k$-means++~\cite{AV2007}, uniform random seeding, greedy sampling for $k$-means~\cite{tkmeanspp}, and robust $k$-means++~\cite{DeshpandeKP20}, on standard real-world and synthetic data sets used in previous work. Our proposal is easily amenable to scalable, faster, parallel implementations of $k$-means++ \cite{Bahmani,BachemL017} and is of independent interest for coreset constructions in the presence of outliers \cite{feldman2007ptas,langberg2010universal,feldman2011unified}.
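One plausible way to robustify $D^2$ seeding along the lines described, blending the $D^2$ distribution with a uniform component so that a small set of distant outliers cannot absorb all the sampling mass, is sketched below. The mixing weight and the exact form of the distribution are illustrative assumptions, not necessarily the paper's variant:

```python
import numpy as np

def robust_seeding(X, k, alpha=0.5, rng=None):
    """k-means++-style seeding where the D^2 distribution is mixed with a
    uniform component, capping the pull of far-away outliers."""
    rng = np.random.default_rng(rng)
    n = len(X)
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance of each point to its nearest chosen center.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        p = (1 - alpha) * d2 / d2.sum() + alpha / n   # mixed D^2 / uniform law
        centers.append(X[rng.choice(n, p=p)])
    return np.array(centers)
```

With alpha = 0 this reduces to plain $k$-means++ seeding; a constant alpha caps the probability advantage any single outlier can gain from its large distance, while still biasing selection toward uncovered regions.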
Combining Thermodynamics-based Model of the Centrifugal Compressors and Active Machine Learning for Enhanced Industrial Design Optimization
Authors: Shadi Ghiasi, Guido Pazzi, Concettina Del Grosso, Giovanni De Magistris, Giacomo Veneri
Abstract
The design process of centrifugal compressors requires applying an optimization process which is computationally expensive due to complex analytical equations underlying the compressor's dynamical equations. Although regression surrogate models could drastically reduce the computational cost of such a process, the major challenge is the scarcity of data for training the surrogate model. Aiming to strategically exploit the labeled samples, we propose the Active-CompDesign framework, in which we combine a thermodynamics-based compressor model (i.e., our internal software for compressor design) and a Gaussian Process-based surrogate model within a deployable Active Learning (AL) setting. We first conduct experiments in an offline setting and then extend it to an online AL framework, where real-time interaction with the thermodynamics-based compressor model allows deployment in production. Active-CompDesign shows a significant performance improvement in surrogate modeling by leveraging the uncertainty-based query function for samples within the AL framework, relative to random selection of data points. Moreover, our framework in production has reduced the total computational time of the compressor design optimization, making it around 46% faster than relying on the internal thermodynamics-based simulator while achieving the same performance.
BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network
Abstract
Generative adversarial network (GAN)-based vocoders have been intensively studied because they can synthesize high-fidelity audio waveforms faster than real-time. However, it has been reported that most GANs fail to obtain the optimal projection for discriminating between real and fake data in the feature space. In the literature, it has been demonstrated that slicing adversarial network (SAN), an improved GAN training framework that can find the optimal projection, is effective in the image generation task. In this paper, we investigate the effectiveness of SAN in the vocoding task. For this purpose, we propose a scheme to modify least-squares GAN, which most GAN-based vocoders adopt, so that their loss functions satisfy the requirements of SAN. Through our experiments, we demonstrate that SAN can improve the performance of GAN-based vocoders, including BigVGAN, with small modifications. Our code is available at https://github.com/sony/bigvsan.
EdgeFL: A Lightweight Decentralized Federated Learning Framework
Authors: Hongyi Zhang, Jan Bosch, Helena Holmström Olsson
Abstract
Federated Learning (FL) has emerged as a promising approach for collaborative machine learning, addressing data privacy concerns. However, existing FL platforms and frameworks often present challenges for software engineers in terms of complexity, limited customization options, and scalability limitations. In this paper, we introduce EdgeFL, an edge-only lightweight decentralized FL framework, designed to overcome the limitations of centralized aggregation and scalability in FL deployments. By adopting an edge-only model training and aggregation approach, EdgeFL eliminates the need for a central server, enabling seamless scalability across diverse use cases. With a straightforward integration process requiring just four lines of code (LOC), software engineers can easily incorporate FL functionalities into their AI products. Furthermore, EdgeFL offers the flexibility to customize aggregation functions, empowering engineers to adapt them to specific needs. Based on the results, we demonstrate that EdgeFL achieves superior performance compared to existing FL platforms/frameworks. Our results show that EdgeFL reduces weights update latency and enables faster model evolution, enhancing the efficiency of edge devices. Moreover, EdgeFL exhibits improved classification accuracy compared to traditional centralized FL approaches. By leveraging EdgeFL, software engineers can harness the benefits of federated learning while overcoming the challenges associated with existing FL platforms/frameworks.
Vote2Cap-DETR++: Decoupling Localization and Describing for End-to-End 3D Dense Captioning
Authors: Sijin Chen, Hongyuan Zhu, Mingsheng Li, Xin Chen, Peng Guo, Yinjie Lei, Gang Yu, Taihao Li, Tao Chen
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
3D dense captioning requires a model to translate its understanding of an input 3D scene into several captions associated with different object regions. Existing methods adopt a sophisticated "detect-then-describe" pipeline, which builds explicit relation modules upon a 3D detector with numerous hand-crafted components. While these methods have achieved initial success, the cascade pipeline tends to accumulate errors because of duplicated and inaccurate box estimations and messy 3D scenes. In this paper, we first propose Vote2Cap-DETR, a simple-yet-effective transformer framework that decouples the decoding process of caption generation and object localization through parallel decoding. Moreover, we argue that object localization and description generation require different levels of scene understanding, which could be challenging for a shared set of queries to capture. To this end, we propose an advanced version, Vote2Cap-DETR++, which decouples the queries into localization and caption queries to capture task-specific features. Additionally, we introduce the iterative spatial refinement strategy to vote queries for faster convergence and better localization performance. We also insert additional spatial information to the caption head for more accurate descriptions. Without bells and whistles, extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate Vote2Cap-DETR and Vote2Cap-DETR++ surpass conventional "detect-then-describe" methods by a large margin. Codes will be made available at https://github.com/ch3cook-fdu/Vote2Cap-DETR.
Pure Monte Carlo Counterfactual Regret Minimization
Authors: Ju Qi, Ting Feng, Falun Hei, Zhemei Fang, Yunfeng Luo
Subjects: Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT); Machine Learning (cs.LG)
Abstract
Counterfactual Regret Minimization (CFR) and its variants are the best algorithms so far for solving large-scale incomplete information games. Building upon CFR, this paper proposes a new algorithm named Pure CFR (PCFR) for achieving better performance. PCFR can be seen as a combination of CFR and Fictitious Play (FP), inheriting the concept of counterfactual regret (value) from CFR, and using the best response strategy instead of the regret matching strategy for the next iteration. Our theoretical proof that PCFR achieves Blackwell approachability enables PCFR to combine with any CFR variant, including Monte Carlo CFR (MCCFR). The resulting Pure MCCFR (PMCCFR) can significantly reduce time and space complexity. In particular, the convergence speed of PMCCFR is at least three times that of MCCFR. In addition, since PMCCFR does not pass through the path of strictly dominated strategies, we developed a new warm-start algorithm inspired by the strictly-dominated-strategy elimination method. Consequently, PMCCFR with the new warm-start algorithm can converge two orders of magnitude faster than the CFR+ algorithm.
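The contrast between CFR's regret matching and the pure best-response rule that PCFR substitutes for it can be shown side by side. This is a generic sketch over one information set's cumulative regrets, not the paper's implementation:

```python
import numpy as np

def regret_matching(regrets):
    """CFR's rule: play actions in proportion to their positive cumulative
    regret; fall back to uniform when no action has positive regret."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    return np.full(len(regrets), 1.0 / len(regrets))

def pure_best_response(regrets):
    """PCFR-style rule: put all probability mass on the single action with
    the highest cumulative regret, i.e. a pure strategy for the next iteration."""
    s = np.zeros(len(regrets))
    s[int(np.argmax(regrets))] = 1.0
    return s
```

Playing a pure strategy each iteration is what lets a Monte Carlo variant skip updating (and storing) mixed-strategy mass on actions that are never selected, which is the source of the time and space savings claimed above.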
MyoDex: A Generalizable Prior for Dexterous Manipulation
Abstract
Human dexterity is a hallmark of motor control. Our hands can rapidly synthesize new behaviors despite the complexity of the underlying musculoskeletal sensory-motor circuits (multi-articular and multi-joint, with 23 joints controlled by more than 40 muscles). In this work, we take inspiration from how human dexterity builds on a diversity of prior experiences rather than being acquired through a single task. Motivated by this observation, we set out to develop agents that can build upon previous experience to quickly acquire new (previously unattainable) behaviors. Specifically, our approach leverages multi-task learning to implicitly capture a task-agnostic behavioral prior (MyoDex) for human-like dexterity, using a physiologically realistic human hand model, MyoHand. We demonstrate MyoDex's effectiveness in few-shot generalization as well as positive transfer to a large repertoire of unseen dexterous manipulation tasks. Agents leveraging MyoDex solve approximately 3x as many tasks, 4x faster, compared to a distillation baseline. While prior work has synthesized single musculoskeletal control behaviors, MyoDex is the first generalizable manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors. We also demonstrate the effectiveness of our paradigm beyond musculoskeletal control with the acquisition of dexterity in the 24-DoF Adroit Hand. Website: https://sites.google.com/view/myodex
3D Object Positioning Using Differentiable Multimodal Learning
Authors: Sean Zanyk-McLean, Krishna Kumar, Paul Navratil
Subjects: Systems and Control (eess.SY); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Robotics (cs.RO)
Abstract
This article describes a multi-modal method that uses simulated Lidar data obtained via ray tracing together with image pixel loss from differentiable rendering to optimize an object's position with respect to an observer or reference objects in a computer graphics scene. Object position optimization is performed with gradient descent, with the loss function influenced by both modalities. Object placement is typically optimized using image pixel loss with differentiable rendering alone; this work shows that adding a second modality (Lidar) leads to faster convergence. This method of fusing sensor input is potentially useful for autonomous vehicles, as it can be used to establish the locations of multiple actors in a scene. The article also presents a method for simulating multiple types of data for use in training autonomous vehicles.
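The optimization loop described above can be sketched in miniature: a position is updated by gradient descent on a weighted sum of an image-style loss and a Lidar-style range loss. The two loss functions below are toy stand-ins of our own (the real method uses differentiable rendering and ray tracing), and `SENSOR` is a hypothetical observer location:

```python
import math

SENSOR = (0.0, 0.0)   # hypothetical Lidar/observer location

def ray_range(pos):
    """Simulated Lidar return: distance from the sensor to the object."""
    return math.dist(SENSOR, pos)

def multimodal_loss(pos, target, w_img=1.0, w_lidar=1.0):
    """Weighted sum of an image-pixel-style term and a Lidar range term."""
    pixel = sum((p - t) ** 2 for p, t in zip(pos, target))
    lidar = (ray_range(pos) - ray_range(target)) ** 2
    return w_img * pixel + w_lidar * lidar

def optimize_position(pos, target, lr=0.1, steps=300, eps=1e-5):
    """Gradient descent on the combined loss (finite-difference gradients)."""
    pos = list(pos)
    for _ in range(steps):
        grad = []
        for i in range(len(pos)):
            hi = pos[:]; hi[i] += eps
            lo = pos[:]; lo[i] -= eps
            grad.append((multimodal_loss(hi, target)
                         - multimodal_loss(lo, target)) / (2 * eps))
        pos = [p - lr * g for p, g in zip(pos, grad)]
    return pos

final = optimize_position([3.0, 4.0], target=[1.0, 2.0])  # approaches [1.0, 2.0]
```

In the paper's setting the extra Lidar term reshapes the loss landscape, which is what speeds up convergence relative to the pixel loss alone.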
Keyword: mobile
Compressing Vision Transformers for Low-Resource Visual Learning
Authors: Eric Youn, Sai Mitheran J, Sanjana Prabhu, Siyuan Chen
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Vision transformer (ViT) and its variants have swept through visual learning leaderboards and offer state-of-the-art accuracy in tasks such as image classification, object detection, and semantic segmentation by attending to different parts of the visual input and capturing long-range spatial dependencies. However, these models are large and computation-heavy. For instance, the recently proposed ViT-B model has 86M parameters, making it impractical for deployment on resource-constrained devices; as a result, deployment in mobile and edge scenarios is limited. In our work, we aim to take a step toward bringing vision transformers to the edge by utilizing popular model compression techniques such as distillation, pruning, and quantization. Our chosen application environment is an unmanned aerial vehicle (UAV) that is battery-powered and memory-constrained, carrying a single-board computer on the scale of an NVIDIA Jetson Nano with 4GB of RAM. At the same time, the UAV requires accuracy close to that of state-of-the-art ViTs to ensure safe object avoidance in autonomous navigation and correct localization of humans in search-and-rescue. Inference latency should also be minimized given the application requirements. Hence, our target is to enable rapid inference of a vision transformer on an NVIDIA Jetson Nano (4GB) with minimal accuracy loss. This allows us to deploy ViTs on resource-constrained devices, opening up new possibilities in surveillance, environmental monitoring, etc. Our implementation is made available at https://github.com/chensy7/efficient-vit.
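Of the three compression techniques named above, quantization is the simplest to illustrate. The following is a minimal sketch of symmetric per-tensor 8-bit quantization (a generic post-training scheme, not the authors' pipeline):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        scale = 1.0            # all-zero tensor: any scale works
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.02]
q, s = quantize_int8(w)        # q holds small integers, s the shared scale
w_hat = dequantize(q, s)       # reconstruction error bounded by s / 2
```

Storing 8-bit codes plus one scale per tensor cuts weight memory roughly 4x versus float32, which is the kind of saving a 4GB Jetson-class device needs.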
Vector-Processing for Mobile Devices: Benchmark and Analysis
Authors: Alireza Khadem, Daichi Fujiki, Nishil Talati, Scott Mahlke, Reetuparna Das
Abstract
Vector processing has become commonplace in today's CPU microarchitectures. Vector instructions improve performance and energy efficiency, which is crucial for resource-constrained mobile devices. The research community currently lacks a comprehensive benchmark suite to study the benefits of vector processing for mobile devices. This paper presents Swan, an extensive vector processing benchmark suite for mobile applications. Swan consists of a diverse set of data-parallel workloads from four commonly used mobile applications: an operating system, a web browser, an audio/video messaging application, and a PDF rendering engine. Using the Swan benchmark suite, we conduct a detailed analysis of the performance, power, and energy consumption of vectorized workloads, and show that: (a) vectorized kernels increase the pressure on the cache hierarchy due to the higher rate of memory requests; (b) vector processing is more beneficial for workloads with lower-precision operations and higher cache hit rates; (c) limited instruction-level parallelism and strided memory accesses to multi-dimensional data structures prevent vector processing benefits from scaling with more SIMD functional units and wider registers; and (d) despite lower computation throughput than domain-specific accelerators such as GPUs, vector processing outperforms these accelerators for kernels with lower operation counts. Finally, we identify five common computation patterns in mobile data-parallel workloads that dominate the execution time.
Learning Vehicle Dynamics from Cropped Image Patches for Robot Navigation in Unpaved Outdoor Terrains
Abstract
In the realm of autonomous mobile robots, safe navigation through unpaved outdoor environments remains a challenging task. Due to the high-dimensional nature of sensor data, extracting relevant information is a complex problem that hinders adequate perception and path planning. Previous works have shown promising performance in extracting global features from full-sized images. However, they often struggle to capture essential local information. In this paper, we propose Crop-LSTM, which iteratively takes cropped image patches around the robot's current position and predicts its future position, orientation, and bumpiness. Our method performs local feature extraction by attending to the image patches that correspond to the predicted robot trajectory in the 2D image plane, which enables more accurate predictions of the robot's future trajectory. With our wheeled mobile robot platform Raicart, we demonstrate the effectiveness of Crop-LSTM for point-goal navigation in an unpaved outdoor environment, enabling safe and robust navigation from RGBD images in challenging unpaved outdoor terrains. The summary video is available at https://youtu.be/iIGNZ8ignk0.
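The patch-cropping step at the heart of the approach can be sketched as follows (a simplified stand-in that represents the image as nested lists; the actual method feeds such patches, taken along the predicted trajectory, to an LSTM):

```python
def crop_patch(image, cx, cy, size):
    """Crop a size x size patch centered at (cx, cy), clamping the window
    so the patch always lies fully inside the image (rows x cols)."""
    rows, cols = len(image), len(image[0])
    half = size // 2
    top = min(max(cy - half, 0), rows - size)
    left = min(max(cx - half, 0), cols - size)
    return [row[left:left + size] for row in image[top:top + size]]

img = [[10 * r + c for c in range(6)] for r in range(6)]
patch = crop_patch(img, cx=1, cy=1, size=3)
# patch == [[0, 1, 2], [10, 11, 12], [20, 21, 22]]
```

Clamping at the borders keeps the patch shape fixed, so the downstream recurrent network always receives inputs of the same dimensions.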
Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory
Abstract
Split learning is a privacy-preserving distributed learning paradigm in which an ML model (e.g., a neural network) is split into two parts (i.e., an encoder and a decoder). The encoder shares a so-called latent representation, rather than raw data, for model training. In mobile-edge computing, network functions (such as traffic forecasting) can be trained via split learning, where an encoder resides in a user equipment (UE) and a decoder resides in the edge network. Based on the data processing inequality and the information bottleneck (IB) theory, we present a new framework and training mechanism that dynamically balances transmission resource consumption against the informativeness of the shared latent representations, which directly impacts the predictive performance. The proposed training mechanism offers an encoder-decoder neural network architecture featuring multiple modes of complexity-relevance tradeoffs, enabling tunable performance. This adaptability can accommodate varying real-time network conditions and application requirements, potentially reducing operational expenditure and enhancing network agility. As a proof of concept, we apply the training mechanism to a millimeter-wave (mmWave)-enabled throughput prediction problem. We also offer new insights and highlight some challenges related to recurrent neural networks from the perspective of the IB theory. Interestingly, we find a compression phenomenon across the temporal domain of the sequential model, in addition to the compression phase that occurs with the number of training epochs.
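The complexity-relevance tradeoff described above is the classical information-bottleneck objective (the paper's exact formulation may differ); writing $X$ for the UE's input, $Z$ for the shared latent representation, and $Y$ for the prediction target:

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```

Larger $\beta$ favors more informative, and hence costlier-to-transmit, latent representations; the multiple modes of the proposed architecture correspond to different operating points along this tradeoff.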
Bandwidth-efficient Inference for Neural Image Compression
Authors: Shanzhi Yin, Tongda Xu, Yongsheng Liang, Yuanyuan Wang, Yanghao Li, Yan Wang, Jingjing Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Abstract
With neural networks growing deeper and feature maps growing larger, limited communication bandwidth to external memory (DRAM) and power constraints become a bottleneck when implementing network inference on mobile and edge devices. In this paper, we propose an end-to-end differentiable, bandwidth-efficient neural inference method in which activations are compressed by a neural data compression method. Specifically, we propose a transform-quantization-entropy coding pipeline for activation compression, with symmetric exponential Golomb coding and a data-dependent Gaussian entropy model for arithmetic coding. Combined with existing model quantization methods, the low-level task of image compression achieves up to a 19x bandwidth reduction with a 6.21x energy saving.
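The exponential-Golomb step of such a pipeline is simple enough to sketch. Below are order-0 codes with the usual signed ("symmetric") mapping, as toy bitstrings (a generic illustration, not the authors' coder):

```python
def exp_golomb(n):
    """Order-0 exponential-Golomb code for a non-negative integer:
    the binary form of n+1, prefixed by (length - 1) zero bits."""
    bits = bin(n + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def signed_exp_golomb(v):
    """Symmetric (signed) mapping 0, 1, -1, 2, -2, ... -> 0, 1, 2, 3, 4, ..."""
    n = 2 * v - 1 if v > 0 else -2 * v
    return exp_golomb(n)

codes = [signed_exp_golomb(v) for v in (0, 1, -1, 2)]
# 0 -> "1", 1 -> "010", -1 -> "011", 2 -> "00100"
```

Small-magnitude values get short codes, which is why such codes suit quantized activations concentrated near zero.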
Autonomous and Collaborative Smart Home Security System (ACSHSS)
Authors: Hassan Jalil Hadi, Khaleeq Un Nisa, Sheetal Harris
Abstract
Firstly, the proposed solution provides remotely accessible integrated IoT resources for the safety and security of the building. Using the Short Messaging Service (SMS), a message is sent to the user over the Global System for Mobile communications (GSM): an SMS alert is sent to the user whenever any sensor detects an abnormality in its operation. Secondly, an authentication mechanism is deployed so that only authorized users can access the resources. Thirdly, in case of a malicious attempt to access the IoT resources, the owner receives a timely alert: a Network Intrusion Detection System (NIDS) is deployed to detect suspicious activity and provide real-time information while the Internet of Things network is being accessed.
Resilient source seeking with robot swarms
Authors: Antonio Acuaviva, Jesus Bautista, Weijia Yao, Juan Jimenez, Hector Garcia de Marina
Subjects: Robotics (cs.RO); Systems and Control (eess.SY)
Abstract
We present a solution for locating the source, or maximum, of an unknown scalar field using a swarm of mobile robots. Rather than relying on traditional gradient information, the swarm determines an ascending direction to approach the source with arbitrary precision. The ascending direction is calculated from measurements of the field strength at the robot locations and their positions relative to the centroid. Rather than focusing on individual robots, we base the analysis on the density of robots per unit area, which guarantees a more resilient swarm: the functionality remains even if individuals go missing or are misplaced during the mission. We reinforce the robustness of the algorithm by providing sufficient conditions on the swarm shape so that the ascending direction is almost parallel to the gradient. The swarm can respond to an unexpected environment by morphing its shape and exploiting the existence of multiple ascending directions. Finally, we validate our approach numerically with hundreds of robots. The fact that a large number of robots always calculate an ascending direction compensates for the loss of individuals and mitigates issues arising from actuator and sensor noise.
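A toy version of the centroid-based ascending-direction computation (our reading of the abstract, not the authors' exact estimator) weights each robot's unit offset from the centroid by its field reading:

```python
import math

def ascending_direction(positions, field):
    """Average the field readings weighted by each robot's unit offset from
    the swarm centroid; the result approximates an ascending direction."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    dx = dy = 0.0
    for x, y in positions:
        rx, ry = x - cx, y - cy
        norm = math.hypot(rx, ry) or 1.0   # guard a robot sitting at the centroid
        w = field(x, y)
        dx += w * rx / norm
        dy += w * ry / norm
    return dx / n, dy / n

# robots on a unit circle; the unknown field peaks at the source (10, 0)
ring = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
        for k in range(8)]
f = lambda x, y: -((x - 10) ** 2 + y ** 2)
d = ascending_direction(ring, f)   # points toward the source: d[0] > 0
```

Because the estimate is an average over all robots, losing or misplacing a few of them perturbs the direction only mildly, which is the resilience property the abstract emphasizes.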
Keyword: pruning
Compressing Vision Transformers for Low-Resource Visual Learning
Authors: Eric Youn, Sai Mitheran J, Sanjana Prabhu, Siyuan Chen
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Vision transformer (ViT) and its variants have swept through visual learning leaderboards and offer state-of-the-art accuracy in tasks such as image classification, object detection, and semantic segmentation by attending to different parts of the visual input and capturing long-range spatial dependencies. However, these models are large and computation-heavy. For instance, the recently proposed ViT-B model has 86M parameters making it impractical for deployment on resource-constrained devices. As a result, their deployment on mobile and edge scenarios is limited. In our work, we aim to take a step toward bringing vision transformers to the edge by utilizing popular model compression techniques such as distillation, pruning, and quantization. Our chosen application environment is an unmanned aerial vehicle (UAV) that is battery-powered and memory-constrained, carrying a single-board computer on the scale of an NVIDIA Jetson Nano with 4GB of RAM. On the other hand, the UAV requires high accuracy close to that of state-of-the-art ViTs to ensure safe object avoidance in autonomous navigation, or correct localization of humans in search-and-rescue. Inference latency should also be minimized given the application requirements. Hence, our target is to enable rapid inference of a vision transformer on an NVIDIA Jetson Nano (4GB) with minimal accuracy loss. This allows us to deploy ViTs on resource-constrained devices, opening up new possibilities in surveillance, environmental monitoring, etc. Our implementation is made available at https://github.com/chensy7/efficient-vit.
Keyword: diffusion
RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model
Authors: Ahmad Sebaq, Mohamed ElHelw
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Satellite imagery generation and super-resolution are pivotal tasks in remote sensing, demanding high-quality, detailed images for accurate analysis and decision-making. In this paper, we propose an innovative and lightweight approach that employs two-stage diffusion models to gradually generate high-resolution satellite images purely from text prompts. Our pipeline comprises two interconnected diffusion models: a Low-Resolution Generation Diffusion Model (LR-GDM) that generates low-resolution images from text, and a Super-Resolution Diffusion Model (SRDM) conditioned on the LR-GDM output. The LR-GDM synthesizes low-resolution images by computing correlations between the text embedding and the image embedding in a shared latent space, capturing the essential content and layout of the desired scenes. Subsequently, the SRDM takes the generated low-resolution image and the corresponding text prompt and efficiently produces the high-resolution counterpart, infusing fine-grained spatial details and enhancing visual fidelity. Experiments are conducted on the commonly used Remote Sensing Image Captioning Dataset (RSICD). The results demonstrate that our approach outperforms existing state-of-the-art (SoTA) models in generating satellite images with realistic geographical features, weather conditions, and land structures, while achieving remarkable super-resolution results for increased spatial precision.
Diffusion on the Probability Simplex
Authors: Griffin Floto, Thorsteinn Jonsson, Mihai Nica, Scott Sanner, Eric Zhengyu Zhu
Abstract
Diffusion models learn to reverse the progressive noising of a data distribution to create a generative model. However, the desired continuous nature of the noising process can be at odds with discrete data. To deal with this tension between continuous and discrete objects, we propose a method for performing diffusion on the probability simplex. Using the probability simplex naturally creates an interpretation where points correspond to categorical probability distributions. Our method applies the softmax function to an Ornstein-Uhlenbeck process, a well-known stochastic differential equation. We find that our methodology naturally extends to diffusion on the unit cube, which has applications to bounded image generation.
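A minimal simulation of the construction, an Euler-Maruyama discretization of an Ornstein-Uhlenbeck process pushed through softmax, stays on the probability simplex by design (illustrative parameters of our choosing, not the paper's):

```python
import math, random

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def ou_simplex_path(x0, steps=200, dt=0.01, theta=1.0, sigma=1.0, seed=0):
    """Euler-Maruyama for dX = -theta * X dt + sigma dW; every iterate is
    mapped through softmax, so each path point is a categorical distribution."""
    rng = random.Random(seed)
    x = list(x0)
    path = [softmax(x)]
    for _ in range(steps):
        x = [xi - theta * xi * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for xi in x]
        path.append(softmax(x))
    return path

path = ou_simplex_path([3.0, 0.0, -3.0])   # starts near one-hot on class 0
```

As the OU process mean-reverts to zero, the softmax image drifts toward the uniform distribution, mirroring how forward diffusion destroys the categorical signal.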
Diffusion-based Time Series Data Imputation for Microsoft 365
Authors: Fangkai Yang, Wenjie Yin, Lu Wang, Tianci Li, Pu Zhao, Bo Liu, Paul Wang, Bo Qiao, Yudong Liu, Mårten Björkman, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang
Abstract
Reliability is extremely important for large-scale cloud systems like Microsoft 365. Cloud failures such as disk failure, node failure, etc. threaten service reliability, resulting in online service interruptions and economic loss. Existing works focus on predicting cloud failures and proactively taking action before failures happen. However, they suffer from poor data quality like data missing in model training and prediction, which limits the performance. In this paper, we focus on enhancing data quality through data imputation by the proposed Diffusion+, a sample-efficient diffusion model, to impute the missing data efficiently based on the observed data. Our experiments and application practice show that our model contributes to improving the performance of the downstream failure prediction task.
Diffusion-EDFs: Bi-equivariant Denoising Generative Modeling on SE(3) for Visual Robotic Manipulation
Authors: Hyunwoo Ryu, Jiwoo Kim, Junwoo Chang, Hyun Seok Ahn, Joohwan Seo, Taehan Kim, Jongeun Choi, Roberto Horowitz
Abstract
Recent studies have verified that equivariant methods can significantly improve the data efficiency, generalizability, and robustness in robot learning. Meanwhile, denoising diffusion-based generative modeling has recently gained significant attention as a promising approach for robotic manipulation learning from demonstrations with stochastic behaviors. In this paper, we present Diffusion-EDFs, a novel approach that incorporates spatial roto-translation equivariance, i.e., SE(3)-equivariance to diffusion generative modeling. By integrating SE(3)-equivariance into our model architectures, we demonstrate that our proposed method exhibits remarkable data efficiency, requiring only 5 to 10 task demonstrations for effective end-to-end training. Furthermore, our approach showcases superior generalizability compared to previous diffusion-based manipulation methods.
Diffusion Model is Secretly a Training-free Open Vocabulary Semantic Segmenter
Abstract
Recent research has explored the utilization of pre-trained text-image discriminative models, such as CLIP, to tackle the challenges associated with open-vocabulary semantic segmentation. However, it is worth noting that the alignment process based on contrastive learning employed by these models may unintentionally result in the loss of crucial localization information and object completeness, which are essential for achieving accurate semantic segmentation. More recently, there has been an emerging interest in extending the application of diffusion models beyond text-to-image generation tasks, particularly in the domain of semantic segmentation. These approaches utilize diffusion models either for generating annotated data or for extracting features to facilitate semantic segmentation, which typically involves training segmentation models by generating a considerable amount of synthetic data or incorporating additional mask annotations. To this end, we uncover the potential of generative text-to-image conditional diffusion models as highly efficient open-vocabulary semantic segmenters, and introduce a novel training-free approach named DiffSegmenter. Specifically, by feeding an input image and candidate classes into an off-the-shelf pre-trained conditional latent diffusion model, the cross-attention maps produced by the denoising U-Net are directly used as segmentation scores, which are further refined and completed by the subsequent self-attention maps. Additionally, we carefully design effective textual prompts and a category filtering mechanism to further enhance the segmentation results. Extensive experiments on three benchmark datasets show that the proposed DiffSegmenter achieves impressive results for open-vocabulary semantic segmentation.
Distributed Least Squares Algorithm for Continuous-time Stochastic Systems Under Cooperative Excitation Condition
Authors: Xinghua Zhu, Zhixin Liu
Subjects: Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
In this paper, we study the distributed adaptive estimation problem for continuous-time stochastic dynamic systems over sensor networks, where each agent can only communicate with its local neighbors. A distributed least squares (LS) algorithm based on a diffusion strategy is proposed so that the sensors can cooperatively estimate the unknown time-invariant parameter vector from continuous-time noisy signals. Using martingale estimation theory and the Itô formula, we provide upper bounds on the estimation error of the proposed distributed LS algorithm, and further obtain convergence results under a cooperative excitation condition. In contrast to existing results, ours are established without boundedness or persistent excitation (PE) conditions on the regression signals. We provide simulation examples showing that multiple sensors can cooperatively accomplish the estimation task even when no individual sensor can do so alone.
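The cooperative-excitation phenomenon in the final sentence can be illustrated with a discrete-time sketch (an adapt-then-combine diffusion update of our own devising, not the paper's continuous-time algorithm): each of two sensors observes only one regressor direction, so neither is persistently excited alone, yet averaging estimates lets both recover the full parameter vector.

```python
import random

def diffusion_lms(T=3000, mu=0.05, theta=(1.0, -0.5), seed=1):
    """Discrete-time diffusion (adapt-then-combine) sketch with two sensors.
    Sensor 0 only ever sees the first regressor direction and sensor 1 only
    the second; combining makes both estimates converge to theta."""
    rng = random.Random(seed)
    w = [[0.0, 0.0], [0.0, 0.0]]                 # per-sensor estimates
    for _ in range(T):
        psi = []
        for i in (0, 1):
            r = rng.uniform(-1.0, 1.0)
            x = [r, 0.0] if i == 0 else [0.0, r]  # partial excitation only
            d = sum(xj * tj for xj, tj in zip(x, theta)) + rng.gauss(0.0, 0.05)
            err = d - sum(xj * wj for xj, wj in zip(x, w[i]))
            psi.append([wj + mu * err * xj for wj, xj in zip(w[i], x)])
        # combine step: average with the neighbor (fully connected pair)
        avg = [(psi[0][k] + psi[1][k]) / 2.0 for k in (0, 1)]
        w = [avg[:], avg[:]]
    return w[0]

est = diffusion_lms()   # approaches (1.0, -0.5)
```

Dropping the combine step leaves each sensor's unobserved coordinate stuck at zero, which is exactly the "no individual can do it alone" situation.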
A Cole-Hopf transformation based fourth-order multiple-relaxation-time lattice Boltzmann model for the coupled Burgers' equations
Authors: Ying Chen, Xi Liu, Zhenhua Chai, Baochang Shi
Abstract
In this work, a Cole-Hopf-transformation-based fourth-order multiple-relaxation-time lattice Boltzmann (MRT-LB) model for the $d$-dimensional coupled Burgers' equations is developed. We first adopt the Cole-Hopf transformation, introducing an intermediate variable $\theta$ to eliminate the nonlinear convection terms in the Burgers' equations for the velocity $\mathbf{u}=(u_1,u_2,\ldots,u_d)$. This yields a diffusion equation for the variable $\theta$; in particular, the velocity $\mathbf{u}$ in the coupled Burgers' equations is determined by $\theta$ and its gradient $\nabla\theta$. We then develop a general MRT-LB model with natural moments for the $d$-dimensional transformed diffusion equation and present the corresponding macroscopic finite-difference scheme. At the diffusive scaling, the fourth-order modified equation of the developed MRT-LB model is derived through the Maxwell iteration method. With the aid of the free parameters in the MRT-LB model, we find that not only can a consistent fourth-order modified equation be obtained, but the gradient term $\nabla\theta$ can also be computed locally from the non-equilibrium distribution function with fourth-order accuracy. This indicates that, theoretically, the MRT-LB model for the $d$-dimensional coupled Burgers' equations can achieve fourth-order accuracy in space. Finally, simulations are conducted to test the MRT-LB model; the numerical results show a fourth-order convergence rate, consistent with our theoretical analysis.
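For reference, the classical Cole-Hopf transformation with viscosity $\nu$ that removes the convection term reads

```latex
\mathbf{u} = -2\nu \,\frac{\nabla\theta}{\theta},
\qquad
\mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u} = \nu \Delta \mathbf{u}
\;\;\Longrightarrow\;\;
\theta_t = \nu \Delta \theta ,
```

so $\mathbf{u}$ is recovered locally from $\theta$ and $\nabla\theta$, which is exactly the structure the model above exploits (the paper's $d$-dimensional coupled form may carry additional coupling constants).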
Fast time-stepping discontinuous Galerkin method for the subdiffusion equation
Abstract
The nonlocality of the fractional operator causes numerical difficulties for long-time computation of time-fractional evolution equations. This paper develops a high-order, fast time-stepping discontinuous Galerkin finite element method for the time-fractional diffusion equations, which saves storage and computational time. The optimal error estimate $O(N^{-p-1} + h^{m+1} + \varepsilon N^{r\alpha})$ of the current time-stepping discontinuous Galerkin method is rigorously proved, where $N$ denotes the number of time intervals, $p$ is the degree of polynomial approximation on each time subinterval, $h$ is the maximum space step, $r\ge1$, $m$ is the order of the finite element space, and $\varepsilon>0$ can be arbitrarily small. Numerical simulations verify the theoretical analysis.
MCM: Multi-condition Motion Synthesis Framework for Multi-scenario
Abstract
The objective of the multi-condition human motion synthesis task is to incorporate diverse conditional inputs, encompassing various forms like text, music, speech, and more. This endows the task with the capability to adapt across multiple scenarios, ranging from text-to-motion to music-to-dance, among others. While existing research has primarily focused on single conditions, multi-condition human motion generation remains underexplored. In this paper, we address these challenges by introducing MCM, a novel paradigm for motion synthesis that spans multiple scenarios under diverse conditions. The MCM framework can integrate with any DDPM-like diffusion model to accommodate multi-conditional information input while preserving its generative capabilities. Specifically, MCM employs a two-branch architecture consisting of a main branch and a control branch. The control branch shares the same structure as the main branch and is initialized with the parameters of the main branch, effectively maintaining the generation ability of the main branch while supporting multi-condition input. We also introduce a Transformer-based diffusion model, MWNet (DDPM-like), as our main branch, which captures the spatial complexity and inter-joint correlations in motion sequences through a channel-dimension self-attention module. Quantitative comparisons demonstrate that our approach achieves SoTA results in text-to-motion and competitive results in music-to-dance, comparable to task-specific methods. Furthermore, the qualitative evaluation shows that MCM not only streamlines the adaptation of methodologies originally designed for text-to-motion tasks to domains like music-to-dance and speech-to-gesture, eliminating the need for extensive network reconfiguration, but also enables effective multi-condition modal control, realizing "once trained is motion need".
SLiMe: Segment Like Me
Abstract
Significant strides have been made using large vision-language models, like Stable Diffusion (SD), for a variety of downstream tasks, including image editing, image correspondence, and 3D shape generation. Inspired by these advancements, we explore leveraging these extensive vision-language models to segment images at any desired granularity using as few as one annotated sample, by proposing SLiMe. SLiMe frames this problem as an optimization task. Specifically, given a single training image and its segmentation mask, we first extract attention maps, including our novel "weighted accumulated self-attention map", from the SD prior. Then, using the extracted attention maps, the text embeddings of Stable Diffusion are optimized such that each of them learns about a single segmented region from the training image. These learned embeddings then highlight the segmented region in the attention maps, which in turn can be used to derive the segmentation map. This enables SLiMe to segment any real-world image during inference with the granularity of the segmented region in the training image, using just one example. Moreover, leveraging additional training data when available, i.e. few-shot, improves the performance of SLiMe. We carried out an extensive set of experiments examining various design factors and showed that SLiMe outperforms other existing one-shot and few-shot segmentation methods.
My Art My Choice: Adversarial Protection Against Unruly AI
Authors: Anthony Rhodes, Ram Bhagat, Umur Aybars Ciftci, Ilke Demir
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Generative AI is on the rise, enabling everyone to produce realistic content via publicly available interfaces. Especially for guided image generation, diffusion models are changing the creator economy by producing high-quality, low-cost content. In parallel, artists are rising against unruly AI, since their artworks are leveraged, distributed, and dissimulated by large generative models. Our approach, My Art My Choice (MAMC), aims to empower content owners by protecting their copyrighted materials from being utilized by diffusion models in an adversarial fashion. MAMC learns to generate adversarially perturbed "protected" versions of images which can in turn "break" diffusion models. The perturbation amount is decided by the artist to balance distortion against protection of the content. MAMC is designed with a simple UNet-based generator, attacking black-box diffusion models, combining several losses to create adversarial twins of the original artwork. We experiment on three datasets for various image-to-image tasks, with different user control values. Both protected images and diffusion outputs are evaluated in visual, noise, structure, pixel, and generative spaces to validate our claims. We believe that MAMC is a crucial step for preserving ownership information for AI generated content in a flawless, based-on-need, and human-centric way.
Keyword: adaptive
Adaptive Adversarial Training Does Not Increase Recourse Costs
Authors: Ian Hardy, Jayanth Yetukuri, Yang Liu
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Abstract
Recent work has connected adversarial attack methods and algorithmic recourse methods: both seek minimal changes to an input instance which alter a model's classification decision. It has been shown that traditional adversarial training, which seeks to minimize a classifier's susceptibility to malicious perturbations, increases the cost of generated recourse; with larger adversarial training radii correlating with higher recourse costs. From the perspective of algorithmic recourse, however, the appropriate adversarial training radius has always been unknown. Another recent line of work has motivated adversarial training with adaptive training radii to address the issue of instance-wise variable adversarial vulnerability, showing success in domains with unknown attack radii. This work studies the effects of adaptive adversarial training on algorithmic recourse costs. We establish that the improvements in model robustness induced by adaptive adversarial training show little effect on algorithmic recourse costs, providing a potential avenue for affordable robustness in domains where recoursability is critical.
Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
Abstract
Symmetry, a fundamental concept to understand our environment, often oversimplifies reality from a mathematical perspective. Humans are a prime example, deviating from perfect symmetry in terms of appearance and cognitive biases (e.g. having a dominant hand). Nevertheless, our brain can easily overcome these imperfections and efficiently adapt to symmetrical tasks. The driving motivation behind this work lies in capturing this ability through reinforcement learning. To this end, we introduce Adaptive Symmetry Learning (ASL), a model-minimization actor-critic extension that addresses incomplete or inexact symmetry descriptions by adapting itself during the learning process. ASL consists of a symmetry fitting component and a modular loss function that enforces a common symmetric relation across all states while adapting to the learned policy. The performance of ASL is compared to existing symmetry-enhanced methods in a case study involving a four-legged ant model for multidirectional locomotion tasks. The results demonstrate that ASL is capable of recovering from large perturbations and generalizing knowledge to hidden symmetric states. It achieves comparable or better performance than alternative methods in most scenarios, making it a valuable approach for leveraging model symmetry while compensating for inherent perturbations.
Improving Code Generation by Dynamic Temperature Sampling
Authors: Yuqi Zhu, Jia Allen Li, Ge Li, YunFei Zhao, Jia Li, Zhi Jin, Hong Mei
Subjects: Software Engineering (cs.SE); Computation and Language (cs.CL)
Abstract
Recently, Large Language Models (LLMs) have shown impressive results in code generation. However, existing decoding strategies are designed for Natural Language (NL) generation, overlooking the differences between NL and programming languages (PL). Due to this oversight, a better decoding strategy for code generation remains an open question. In this paper, we conduct the first systematic study to explore a decoding strategy specialized for code generation. By analyzing the loss distributions of code tokens, we find that code tokens can be divided into two categories: challenging tokens that are difficult to predict and confident tokens that can be easily inferred. The challenging tokens mainly appear at the beginning of a code block. Inspired by these findings, we propose a simple yet effective method: Adaptive Temperature (AdapT) sampling, which dynamically adjusts the temperature coefficient when decoding different tokens. We apply a larger temperature when sampling challenging tokens, allowing LLMs to explore diverse choices, and a smaller temperature for confident tokens, avoiding the influence of tail randomness noise. We apply AdapT sampling to LLMs of different sizes and conduct evaluations on two popular datasets. Results show that AdapT sampling significantly outperforms state-of-the-art decoding strategies.
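The core idea can be sketched in a few lines: choose a temperature per token based on whether it is judged challenging, then sample from the rescaled softmax. This is an illustrative sketch, not the paper's implementation; the temperature values and the `is_challenging` flag (e.g. set for tokens at the start of a code block) are assumptions.

```python
import numpy as np

def adapt_sample(logits, is_challenging, t_high=1.2, t_low=0.6, rng=None):
    """AdapT-style decoding sketch (illustrative values, not the paper's).

    Challenging tokens get a higher temperature to encourage exploration;
    confident tokens get a lower one to suppress tail randomness noise.
    """
    rng = rng or np.random.default_rng()
    t = t_high if is_challenging else t_low
    z = logits / t
    z -= z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()   # softmax at temperature t
    return rng.choice(len(logits), p=p)
```

At a very low temperature the sampler is effectively greedy, while a higher temperature spreads probability mass over plausible alternatives, which is exactly the behavior the abstract motivates for block-initial tokens.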
Distributed Least Squares Algorithm for Continuous-time Stochastic Systems Under Cooperative Excitation Condition
Authors: Xinghua Zhu, Zhixin Liu
Subjects: Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
In this paper, we study the distributed adaptive estimation problem for continuous-time stochastic dynamic systems over sensor networks, where each agent can only communicate with its local neighbors. A distributed least squares (LS) algorithm based on a diffusion strategy is proposed so that the sensors can cooperatively estimate the unknown time-invariant parameter vector from continuous-time noisy signals. Using martingale estimation theory and the Itô formula, we provide upper bounds on the estimation error of the proposed distributed LS algorithm, and further obtain convergence results under a cooperative excitation condition. Compared with existing results, ours are established without the boundedness or persistent excitation (PE) conditions on the regression signals. We provide simulation examples to show that multiple sensors can cooperatively accomplish the estimation task even when no individual sensor can do so alone.
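The adapt-then-combine structure of diffusion-strategy estimation can be illustrated with a simple discretized recursion; the paper itself works in continuous time with stochastic integrals, so this sketch, including the combination matrix `W`, step size `mu`, and gradient-style adapt step, is an illustrative analogue rather than the paper's algorithm.

```python
import numpy as np

def diffusion_ls(neighbors, W, phis, ys, theta0, steps=200, mu=0.05):
    """Diffusion-style distributed LS sketch (discrete-time, illustrative).

    Each agent i refines its estimate with its local regressor/observation
    pair (adapt step), then averages with its neighbors' estimates using
    the row-stochastic combination matrix W (combine step).
    """
    n = len(phis)
    thetas = [theta0.copy() for _ in range(n)]
    for _ in range(steps):
        # adapt: stochastic-gradient LS step on local data
        psi = [thetas[i] + mu * phis[i] * (ys[i] - phis[i] @ thetas[i])
               for i in range(n)]
        # combine: diffuse estimates over the network
        thetas = [sum(W[i][j] * psi[j] for j in neighbors[i]) for i in range(n)]
    return thetas
```

The cooperative excitation condition shows up naturally here: if agent 1 only observes the first coordinate and agent 2 only the second, neither can identify the full parameter alone, yet the network jointly can.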
An SPH formulation for general plate and shell structures with finite deformation and large rotation
Authors: Dong Wu, Chi Zhang, Xiangyu Hu
Subjects: Numerical Analysis (math.NA); Computational Engineering, Finance, and Science (cs.CE)
Abstract
In this paper, we propose a reduced-dimensional smoothed particle hydrodynamics (SPH) formulation for quasi-static and dynamic analyses of plate and shell structures undergoing finite deformation and large rotation. By exploiting Uflyand-Mindlin plate theory, the present surface-particle formulation is able to resolve thin structures using only one layer of particles at the mid-surface. To resolve the geometric non-linearity and capture finite deformation and large rotation, two reduced-dimensional linear-reproducing correction matrices are introduced, and weighted non-singularity conversions between the rotation angle and pseudo normal are formulated. A new non-isotropic Kelvin-Voigt damping is proposed, especially for both thin and moderately thick plate and shell structures, to increase numerical stability. In addition, a shear-scaled momentum-conserving hourglass control algorithm with an adaptive limiter is introduced to suppress the mismatches between the particle position and pseudo normal and those estimated with the deformation gradient. A comprehensive set of test problems, for which analytical or numerical results from the literature or from the volume-particle SPH model are available for quantitative and qualitative comparison, is examined to demonstrate the accuracy and stability of the present method.
Getting too personal(ized): The importance of feature choice in online adaptive algorithms
Authors: ZhaoBin Li, Luna Yee, Nathaniel Sauerberg, Irene Sakson, Joseph Jay Williams, Anna N. Rafferty
Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
Abstract
Digital educational technologies offer the potential to customize students' experiences and learn what works for which students, enhancing the technology as more students interact with it. We consider whether and when attempting to discover how to personalize has a cost, such as if the adaptation to personal information can delay the adoption of policies that benefit all students. We explore these issues in the context of using multi-armed bandit (MAB) algorithms to learn a policy for what version of an educational technology to present to each student, varying the relation between student characteristics and outcomes and also whether the algorithm is aware of these characteristics. Through simulations, we demonstrate that the inclusion of student characteristics for personalization can be beneficial when those characteristics are needed to learn the optimal action. In other scenarios, this inclusion decreases performance of the bandit algorithm. Moreover, including unneeded student characteristics can systematically disadvantage students with less common values for these characteristics. Our simulations do however suggest that real-time personalization will be helpful in particular real-world scenarios, and we illustrate this through case studies using existing experimental results in ASSISTments. Overall, our simulations show that adaptive personalization in educational technologies can be a double-edged sword: real-time adaptation improves student experiences in some contexts, but the slower adaptation and potentially discriminatory results mean that a more personalized model is not always beneficial.
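The trade-off described above can be illustrated with a toy epsilon-greedy simulation: when arm quality depends on a student characteristic, a personalized learner benefits from tracking it, while a pooled learner cannot distinguish the arms. Everything here (the reward model, the epsilon-greedy learner, all parameter values) is a hypothetical setup, not the paper's experimental design.

```python
import numpy as np

def run_bandit(reward_probs, use_feature, n_students=2000, eps=0.1, seed=0):
    """Toy two-arm, two-characteristic bandit simulation (illustrative).

    reward_probs[ctx][arm] is each arm's success probability for a student
    characteristic ctx in {0, 1}. With use_feature=True the epsilon-greedy
    learner keeps separate estimates per characteristic; otherwise it pools
    all students. Returns the average observed reward.
    """
    rng = np.random.default_rng(seed)
    n_ctx, n_arms = 2, 2
    counts = np.ones((n_ctx if use_feature else 1, n_arms))
    wins = np.ones_like(counts) * 0.5      # optimistic-neutral prior
    total = 0.0
    for _ in range(n_students):
        ctx = rng.integers(n_ctx)
        row = ctx if use_feature else 0
        if rng.random() < eps:
            arm = rng.integers(n_arms)      # explore
        else:
            arm = int(np.argmax(wins[row] / counts[row]))  # exploit
        r = float(rng.random() < reward_probs[ctx][arm])
        counts[row, arm] += 1
        wins[row, arm] += r
        total += r
    return total / n_students
```

With context-dependent arms such as `[[0.9, 0.1], [0.1, 0.9]]`, the pooled learner sees both arms as roughly equivalent and hovers near chance, mirroring the paper's point that the value of personalization hinges on whether the characteristics are needed to learn the optimal action.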
SymED: Adaptive and Online Symbolic Representation of Data on the Edge
Authors: Daniel Hofstätter, Shashikant Ilager, Ivan Lujic, Ivona Brandic
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG)
Abstract
The edge computing paradigm helps handle the Internet of Things (IoT) generated data in proximity to its source. Challenges occur in transferring, storing, and processing this rapidly growing amount of data on resource-constrained edge devices. Symbolic Representation (SR) algorithms are promising solutions to reduce the data size by converting actual raw data into symbols. Also, they allow data analytics (e.g., anomaly detection and trend prediction) directly on symbols, benefiting large classes of edge applications. However, existing SR algorithms are centralized in design and work offline with batch data, which is infeasible for real-time cases. We propose SymED - Symbolic Edge Data representation method, i.e., an online, adaptive, and distributed approach for symbolic representation of data on edge. SymED is based on the Adaptive Brownian Bridge-based Aggregation (ABBA), where we assume low-powered IoT devices do initial data compression (senders) and the more robust edge devices do the symbolic conversion (receivers). We evaluate SymED by measuring compression performance, reconstruction accuracy through Dynamic Time Warping (DTW) distance, and computational latency. The results show that SymED is able to (i) reduce the raw data with an average compression rate of 9.5%; (ii) keep a low reconstruction error of 13.25 in the DTW space; (iii) simultaneously provide real-time adaptability for online streaming IoT data at typical latencies of 42ms per symbol, reducing the overall network traffic.
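A heavily simplified sketch of the sender side: the stream is cut into linear pieces whenever the data strays too far from the straight line joining a piece's endpoints, and each piece is summarized by its length and increment, in the spirit of ABBA-style aggregation. The tolerance rule and tuple encoding are illustrative simplifications, not the actual ABBA/SymED pipeline.

```python
import numpy as np

def compress_pieces(ts, tol=0.5):
    """ABBA-style piecewise-linear sender sketch (simplified, illustrative).

    Starts a new piece whenever any sample deviates more than `tol` from
    the line joining the current piece's endpoints. Each emitted piece is
    (length, total increment), from which endpoints can be reconstructed.
    """
    pieces, start = [], 0
    for i in range(1, len(ts)):
        # candidate line from ts[start] to ts[i]
        xs = np.arange(i - start + 1)
        line = ts[start] + (ts[i] - ts[start]) * xs / max(i - start, 1)
        if np.max(np.abs(ts[start:i + 1] - line)) > tol:
            pieces.append((i - 1 - start, ts[i - 1] - ts[start]))
            start = i - 1
    pieces.append((len(ts) - 1 - start, ts[-1] - ts[start]))
    return pieces
```

A receiver can rebuild an approximation by chaining the increments, which is how such symbolic representations trade a bounded reconstruction error (e.g. measured in DTW space, as in the abstract) for a large reduction in transmitted data.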
Prompt-based All-in-One Image Restoration using CNNs and Transformer
Authors: Hu Gao, Jing Yang, Ning Wang, Jingfan Yang, Ying Zhang, Depeng Dang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Image restoration aims to recover high-quality images from their degraded observations. Since most existing methods are dedicated to removing a single type of degradation, they may not yield optimal results on other degradation types, which limits their applicability in real-world scenarios. In this paper, we propose a novel data ingredient-oriented approach that leverages prompt-based learning to enable a single model to efficiently tackle multiple image degradation tasks. Specifically, we use an encoder to capture features and introduce prompts with degradation-specific information to guide the decoder in adaptively recovering images affected by various degradations. To model both local invariant properties and non-local information for high-quality image restoration, we combine CNN operations and Transformers. We also make several key designs in the Transformer blocks (multi-head rearranged attention with prompts and a simple-gate feed-forward network) to reduce computational requirements and selectively determine what information should be preserved, facilitating efficient recovery of potentially sharp images. Furthermore, we incorporate a feature fusion mechanism that further explores multi-scale information to improve the aggregated features. Although the resulting tightly interlinked hierarchical architecture, named CAPTNet, is designed to handle different types of degradations, extensive experiments demonstrate that our method performs competitively with task-specific algorithms.
Error analysis for local coarsening in univariate spline spaces
Authors: Silvano Figueroa, Eduardo M. Garau, Pedro Morin
Abstract
In this article we analyze the error produced by the removal of an arbitrary knot from a spline function. When a knot has multiplicity greater than one, this implies a reduction of its multiplicity by one unit. In particular, we deduce a very simple formula to compute the error in terms of some neighboring knots and a few control points of the considered spline. Furthermore, we show precisely how this error is related to the jump of a derivative of the spline at the knot. We then use the developed theory to propose efficient and very low-cost local error indicators and adaptive coarsening algorithms. Finally, we present some numerical experiments to illustrate their performance and show some applications.
Keyword: quantization
Compressing Vision Transformers for Low-Resource Visual Learning
Authors: Eric Youn, Sai Mitheran J, Sanjana Prabhu, Siyuan Chen
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Vision transformer (ViT) and its variants have swept through visual learning leaderboards and offer state-of-the-art accuracy in tasks such as image classification, object detection, and semantic segmentation by attending to different parts of the visual input and capturing long-range spatial dependencies. However, these models are large and computation-heavy. For instance, the recently proposed ViT-B model has 86M parameters making it impractical for deployment on resource-constrained devices. As a result, their deployment on mobile and edge scenarios is limited. In our work, we aim to take a step toward bringing vision transformers to the edge by utilizing popular model compression techniques such as distillation, pruning, and quantization. Our chosen application environment is an unmanned aerial vehicle (UAV) that is battery-powered and memory-constrained, carrying a single-board computer on the scale of an NVIDIA Jetson Nano with 4GB of RAM. On the other hand, the UAV requires high accuracy close to that of state-of-the-art ViTs to ensure safe object avoidance in autonomous navigation, or correct localization of humans in search-and-rescue. Inference latency should also be minimized given the application requirements. Hence, our target is to enable rapid inference of a vision transformer on an NVIDIA Jetson Nano (4GB) with minimal accuracy loss. This allows us to deploy ViTs on resource-constrained devices, opening up new possibilities in surveillance, environmental monitoring, etc. Our implementation is made available at https://github.com/chensy7/efficient-vit.
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
Authors: Liang Li, Qingyuan Li, Bo Zhang, Xiangxiang Chu
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Abstract
As the size of large language models (LLMs) continues to grow, model compression without sacrificing accuracy has become a crucial challenge for deployment. While some quantization methods, such as GPTQ, have made progress in achieving acceptable 4-bit weight-only quantization, attempts at lower-bit quantization often result in severe performance degradation. In this paper, we introduce a technique called norm tweaking, which can be used as a plugin in current PTQ methods to achieve high precision while being cost-efficient. Our approach is inspired by the observation that rectifying the quantized activation distribution to match its float counterpart can readily restore accuracy for LLMs. To achieve this, we carefully design a tweaking strategy that includes calibration data generation and a channel-wise distance constraint to update the weights of normalization layers for better generalization. We conduct extensive experiments on various datasets using several open-source LLMs. Our method demonstrates significant improvements in both weight-only quantization and joint quantization of weights and activations, surpassing existing PTQ methods. On GLM-130B and OPT-66B, our method even achieves the same level of accuracy at 2-bit quantization as their float counterparts. Our simple and effective approach makes it more practical for real-world applications.
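The intuition, nudging normalization scales so that quantized activation statistics match their float counterparts, can be sketched as a channel-wise gradient step. The matched statistic (mean absolute magnitude), the loss, and the learning rate below are illustrative assumptions; the paper's actual procedure uses generated calibration data and a channel-wise distance constraint inside a full PTQ pipeline.

```python
import numpy as np

def tweak_norm_scale(gamma, float_act, quant_act, lr=0.5, steps=50):
    """Illustrative norm-tweaking step (not the paper's exact procedure).

    Adjusts a LayerNorm-style per-channel scale `gamma` so the statistics
    of the quantized activations move toward their float counterparts,
    minimizing a channel-wise squared distance by gradient descent.
    """
    gamma = gamma.copy()
    for _ in range(steps):
        # per-channel mean absolute magnitude as the matched statistic
        f = np.abs(float_act).mean(axis=0)
        q = np.abs(quant_act * gamma).mean(axis=0)
        # d/dgamma of sum_c (q_c - f_c)^2, assuming positive gamma
        grad = 2 * (q - f) * np.abs(quant_act).mean(axis=0)
        gamma -= lr * grad
    return gamma
```

In this toy setting, if quantization uniformly shrinks activations, the tweaked scale simply grows to compensate, which mirrors the abstract's claim that matching the float activation distribution can restore accuracy without retraining the full model.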
Bandwidth-efficient Inference for Neural Image Compression
Authors: Shanzhi Yin, Tongda Xu, Yongsheng Liang, Yuanyuan Wang, Yanghao Li, Yan Wang, Jingjing Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Abstract
With neural networks growing deeper and feature maps growing larger, limited communication bandwidth with external memory (DRAM) and power constraints become a bottleneck for network inference on mobile and edge devices. In this paper, we propose an end-to-end differentiable, bandwidth-efficient neural inference method in which activations are compressed by a neural data compression method. Specifically, we propose a transform-quantization-entropy coding pipeline for activation compression, with symmetric exponential Golomb coding and a data-dependent Gaussian entropy model for arithmetic coding. Combined with existing model quantization methods, the low-level task of image compression achieves up to a 19x bandwidth reduction with 6.21x energy savings.
Disarming Steganography Attacks Inside Neural Network Models
Abstract
Similar to the revolution of open-source code sharing, Artificial Intelligence (AI) model sharing is gaining popularity. However, fast adoption in industry, a lack of awareness, and the ability to exploit shared models make them significant attack vectors. By embedding malware in neurons, the malware can be delivered covertly, with minor or no impact on the neural network's performance. A covert attack can use the Least Significant Bit (LSB) weight attack, since the LSBs have a minimal effect on model accuracy and, as a result, the user will not notice it. Since there are endless ways to hide the attacks, we focus on a zero-trust prevention strategy based on AI model attack disarm and reconstruction. We propose three types of model steganography weight-disarm defense mechanisms. The first two are based on random bit-substitution noise, and the third on model weight quantization. We demonstrate a 100% prevention rate while the methods introduce a minimal decrease in model accuracy, based on the Qint8 and K-LRBP methods, which is an essential factor for improving AI security.
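The random bit-substitution idea can be sketched directly on float32 weights: overwrite the lowest mantissa bits of each weight with fresh random bits, destroying any LSB-embedded payload while perturbing each value by only a few hundred ulps at most. The bit width and RNG choice here are illustrative, not the paper's exact mechanism.

```python
import numpy as np

def disarm_lsb(weights, n_bits=8, seed=0):
    """Random bit-substitution disarm sketch (illustrative parameters).

    Reinterprets float32 weights as uint32, replaces the `n_bits`
    least-significant mantissa bits with random bits, and reinterprets
    back. The exponent and high mantissa bits are untouched, so the
    numerical perturbation is tiny, but any LSB-hidden payload is erased.
    """
    rng = np.random.default_rng(seed)
    raw = weights.astype(np.float32).view(np.uint32)
    mask = np.uint32((1 << n_bits) - 1)
    noise = rng.integers(0, 1 << n_bits, size=raw.shape, dtype=np.uint32)
    return ((raw & ~mask) | noise).view(np.float32)
```

Because the defense does not need to detect the payload, only to randomize the bits it could live in, it works against any LSB-style embedding, which is the zero-trust aspect the abstract emphasizes.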
Keyword: efficient
An efficient spectral method for the dynamic behavior of truss structures
RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model
Effective Multi-Graph Neural Networks for Illicit Account Detection on Cryptocurrency Transaction Networks
Towards User Guided Actionable Recourse
Integrated Photonic AI Accelerators under Hardware Security Attacks: Impacts and Countermeasures
Diffusion-based Time Series Data Imputation for Microsoft 365
Causal Structure Recovery of Linear Dynamical Systems: An FFT based Approach
Experience Capture in Shipbuilding through Computer Applications and Neural Networks
Detection of Unknown-Unknowns in Cyber-Physical Systems using Statistical Conformance with Physics Guided Process Models
Screening of Pneumonia and Urinary Tract Infection at Triage using TriNet
Distributed Variational Inference for Online Supervised Learning
DAMM: Directionality-Aware Mixture Model Parallel Sampling for Efficient Dynamical System Learning
Generative Algorithms for Fusion of Physics-Based Wildfire Spread Models with Satellite Data for Initializing Wildfire Forecasts
Compressing Vision Transformers for Low-Resource Visual Learning
Efficient Maximum $k$-Defective Clique Computation with Improved Time Complexity
Joint Beamforming and Power Allocation for RIS Aided Full-Duplex Integrated Sensing and Uplink Communication System
Energy stable and maximum bound principle preserving schemes for the Q-tensor flow of liquid crystals
Episodic Logit-Q Dynamics for Efficient Learning in Stochastic Teams
Efficient Training for Visual Tracking with Deformable Transformer
Stacked Intelligent Metasurfaces for Multiuser Downlink Beamforming in the Wave Domain
Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
Gesture-Informed Robot Assistance via Foundation Models
MLN-net: A multi-source medical image segmentation method for clustered microcalcifications using multiple layer normalization
Pre- and post-contact policy decomposition for non-prehensile manipulation with zero-shot sim-to-real transfer
Improving Code Generation by Dynamic Temperature Sampling
Diffusion Model is Secretly a Training-free Open Vocabulary Semantic Segmenter
Technical Report: A Contact-aware Feedback CPG System for Learning-based Locomotion Control in a Soft Snake Robot
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative Inference Framework for Deep Learning Classification Tasks
Geometry and Wideband Performance of a Maximal Ratio Combining Beam
Adjacency-hopping de Bruijn Sequences for Non-repetitive Coding
Bandwidth-efficient Inference for Neural Image Compression
New methods for quasi-interpolation approximations: resolution of odd-degree singularities
Non-Clashing Teaching Maps for Balls in Graphs
Aligning Large Language Models for Clinical Tasks
A Unified Framework for Discovering Discrete Symmetries
Towards Efficient Training with Negative Samples in Visual Tracking
DECODE: Data-driven Energy Consumption Prediction leveraging Historical Data and Environmental Factors in Buildings
Evaluation of NR-Sidelink for Cooperative Industrial AGVs
M3D-NCA: Robust 3D Segmentation with Built-in Quality Control
Hierarchical-level rain image generative model based on GAN
Reviving Static Charts into Live Charts
FishMOT: A Simple and Effective Method for Fish Tracking Based on IoU Matching
Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models
Sparse 3D Reconstruction via Object-Centric Ray Sampling
A Micro-Macro parallel-in-time Implementation for the 2D Navier-Stokes Equations
Adaptive Growth: Real-time CNN Layer Expansion
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra
Prompt-based All-in-One Image Restoration using CNNs and Transformer
Establishing Markov Equivalence in Cyclic Directed Graphs
Solving multiscale elliptic problems by sparse radial basis function neural networks
Serving Time: Real-Time, Safe Motion Planning and Control for Manipulation of Unsecured Objects
Optimal transmission switching and grid reconfiguration for transmission systems via convex relaxations
Learning to Recharge: UAV Coverage Path Planning through Deep Reinforcement Learning
UMS: Live Migration of Containerized Services across Autonomous Computing Systems
Error analysis for local coarsening in univariate spline spaces
Keyword: faster
Experience and Prediction: A Metric of Hardness for a Novel Litmus Test
DAMM: Directionality-Aware Mixture Model Parallel Sampling for Efficient Dynamical System Learning
Improved Outlier Robust Seeding for k-means
Combining Thermodynamics-based Model of the Centrifugal Compressors and Active Machine Learning for Enhanced Industrial Design Optimization
BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network
EdgeFL: A Lightweight Decentralized Federated Learning Framework
FishMOT: A Simple and Effective Method for Fish Tracking Based on IoU Matching
Vote2Cap-DETR++: Decoupling Localization and Describing for End-to-End 3D Dense Captioning
Pure Monte Carlo Counterfactual Regret Minimization
MyoDex: A Generalizable Prior for Dexterous Manipulation
3D Object Positioning Using Differentiable Multimodal Learning
Keyword: mobile
Compressing Vision Transformers for Low-Resource Visual Learning
Vector-Processing for Mobile Devices: Benchmark and Analysis
Learning Vehicle Dynamics from Cropped Image Patches for Robot Navigation in Unpaved Outdoor Terrains
Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory
Bandwidth-efficient Inference for Neural Image Compression
Autonomous and Collaborative Smart Home Security System (ACSHSS)
Resilient source seeking with robot swarms
Keyword: pruning
Compressing Vision Transformers for Low-Resource Visual Learning
Keyword: diffusion
RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model
Diffusion on the Probability Simplex
Diffusion-based Time Series Data Imputation for Microsoft 365
Diffusion-EDFs: Bi-equivariant Denoising Generative Modeling on SE(3) for Visual Robotic Manipulation
Diffusion Model is Secretly a Training-free Open Vocabulary Semantic Segmenter
Distributed Least Squares Algorithm for Continuous-time Stochastic Systems Under Cooperative Excitation Condition
A Cole-Hopf transformation based fourth-order multiple-relaxation-time lattice Boltzmann model for the coupled Burgers' equations
Fast time-stepping discontinuous Galerkin method for the subdiffusion equation
MCM: Multi-condition Motion Synthesis Framework for Multi-scenario
SLiMe: Segment Like Me
My Art My Choice: Adversarial Protection Against Unruly AI
Keyword: adaptive
Adaptive Adversarial Training Does Not Increase Recourse Costs
Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
Improving Code Generation by Dynamic Temperature Sampling
Distributed Least Squares Algorithm for Continuous-time Stochastic Systems Under Cooperative Excitation Condition
An SPH formulation for general plate and shell structures with finite deformation and large rotation
Getting too personal(ized): The importance of feature choice in online adaptive algorithms
SymED: Adaptive and Online Symbolic Representation of Data on the Edge
Prompt-based All-in-One Image Restoration using CNNs and Transformer
Error analysis for local coarsening in univariate spline spaces
Keyword: quantization
Compressing Vision Transformers for Low-Resource Visual Learning
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
Bandwidth-efficient Inference for Neural Image Compression
Disarming Steganography Attacks Inside Neural Network Models