Abstract
The depth separation theory is now widely accepted as an effective explanation for the power of depth. It consists of two parts: i) there exists a function representable by a deep network; and ii) such a function cannot be represented by a shallow network whose width is below a threshold. However, this theory is established for feedforward networks. Few studies, if any, have considered depth separation in the context of shortcut connections, which are among the most common network structures in solving real-world problems. Here, we find that adding intra-layer links modifies the depth separation theory. First, we show that adding intra-layer links can greatly improve a network's representation capability, through bound estimation, explicit construction, and functional-space analysis. Then, we modify the depth separation theory by showing that a shallow network with intra-layer links does not need to be as wide as before to express certain hard functions constructed by a deep network, including the renowned "sawtooth" functions. Moreover, the saving in width is up to linear. Our results supplement the existing depth separation theory by examining its limits in the shortcut domain. The mechanism we identify also translates into analyses of the expressivity of popular shortcut networks such as ResNet and DenseNet; \textit{e.g.}, residual connections empower a network to represent a sawtooth function efficiently.
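The "sawtooth" functions referenced in this abstract have a standard explicit construction: a three-ReLU "hat" map composed $k$ times yields a sawtooth with $2^{k-1}$ teeth using only $O(k)$ parameters, which a shallow feedforward network needs exponential width to replicate. A minimal numpy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Piecewise-linear "hat" map on [0, 1] built from three ReLU units:
    # hat(x) = 2x on [0, 1/2] and 2 - 2x on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sawtooth(x, k):
    # Composing the hat map k times gives 2^(k-1) teeth, i.e. 2^k linear
    # pieces, from a depth-k network with O(k) parameters in total.
    y = x
    for _ in range(k):
        y = hat(y)
    return y
```

For instance, `sawtooth(x, 3)` peaks at x = 1/8, 3/8, 5/8, 7/8 and vanishes at the dyadic points between them.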
The Privacy-Utility Tradeoff in Rank-Preserving Dataset Obfuscation
Authors: Mahshad Shariatnasab, Farhad Shirani, S. Sitharma Iyengar
Subjects: Information Theory (cs.IT); Cryptography and Security (cs.CR); Databases (cs.DB)
Abstract
Dataset obfuscation refers to techniques in which random noise is added to the entries of a given dataset, prior to its public release, to protect against leakage of private information. In this work, dataset obfuscation under two objectives is considered: i) rank-preservation: to preserve the row ordering in the obfuscated dataset induced by a given rank function, and ii) anonymity: to protect user anonymity under fingerprinting attacks. The first objective, rank-preservation, is of interest in applications such as the design of search engines and recommendation systems, feature matching, and social network analysis. Fingerprinting attacks, considered in evaluating the anonymity objective, are privacy attacks where an attacker constructs a fingerprint of a victim based on its observed activities, such as online web activities, and compares this fingerprint with information extracted from a publicly released obfuscated dataset to identify the victim. By evaluating the performance limits of a class of obfuscation mechanisms over asymptotically large datasets, a fundamental trade-off is quantified between rank-preservation and user anonymity. Single-letter obfuscation mechanisms are considered, where each entry in the dataset is perturbed by independent noise, and their fundamental performance limits are characterized by leveraging large deviation techniques. The optimal obfuscating test-channel, optimizing the privacy-utility tradeoff, is characterized in the form of a convex optimization problem which can be solved efficiently. Numerical simulations of various scenarios are provided to verify the theoretical derivations.
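As a toy illustration of the single-letter mechanisms described above, each dataset entry can be perturbed by independent noise and rank-preservation checked against a given rank function. This is a hedged sketch with made-up names (`obfuscate`, `rank_preserved`) and a simple Gaussian channel, not the paper's optimized test-channel:

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate(dataset, noise_std):
    # Single-letter mechanism: every entry is perturbed by independent
    # Gaussian noise before public release.
    return dataset + rng.normal(0.0, noise_std, size=dataset.shape)

def rank_preserved(original, released, rank_fn):
    # Does the row ordering induced by rank_fn survive obfuscation?
    return np.array_equal(np.argsort(rank_fn(original)),
                          np.argsort(rank_fn(released)))

def row_sum(d):
    return d.sum(axis=1)

data = np.array([[1.0, 2.0], [10.0, 20.0], [100.0, 200.0]])
released = obfuscate(data, noise_std=0.1)
```

With widely separated row sums and small noise, the ranking survives; the tradeoff quantified in the paper is how much noise (anonymity against fingerprinting) can be injected before it does not.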
Theoretical Analyses of Evolutionary Algorithms on Time-Linkage OneMax with General Weights
Authors: Weijie Zheng, Xin Yao
Subjects: Neural and Evolutionary Computing (cs.NE)
Abstract
Evolutionary computation has shown its superiority in dynamic optimization, but for (dynamic) time-linkage problems, some theoretical studies have revealed possible weaknesses of evolutionary computation. Since the theoretically analyzed time-linkage problem only considers the influence of an extremely strong negative time-linkage effect, it remains unclear whether the weakness also appears in problems with more general time-linkage effects. Moreover, an in-depth understanding of the relationship between time-linkage effects and algorithmic features is important for building up our knowledge of which algorithmic features are suited to which kinds of problems. In this paper, we analyze the general time-linkage effect and consider the time-linkage OneMax with general weights, whose absolute values reflect the strength and whose signs reflect the positive or negative influence. We prove that, except for some small and positive time-linkage effects (that is, for weights $0$ and $1$), randomized local search (RLS) and the (1+1) EA fail to converge to the global optimum with positive probability. More precisely, for negative time-linkage effects (negative weights), neither algorithm can efficiently reach the global optimum, and the probability of failing to converge is at least $1-o(1)$. For not-so-small positive time-linkage effects (positive weights greater than $1$), this probability is at most $c+o(1)$, where $c$ is a constant strictly less than $1$.
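For readers unfamiliar with the algorithms analyzed above, a minimal (1+1) EA on plain OneMax (without the time-linkage weight studied in the paper) can be sketched as follows; the iteration bound and seed are arbitrary:

```python
import random

def one_max(bits):
    return sum(bits)

def one_plus_one_ea(n, max_iters, seed=0):
    # (1+1) EA: flip each bit independently with probability 1/n,
    # accept the offspring if it is at least as good as the parent.
    rnd = random.Random(seed)
    x = [rnd.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [b ^ (rnd.random() < 1.0 / n) for b in x]
        if one_max(y) >= one_max(x):
            x = y
        if one_max(x) == n:
            break
    return x
```

On plain OneMax this reaches the all-ones optimum in O(n log n) expected iterations; the paper's point is that adding a time-linkage weight to the fitness can make such algorithms fail to converge with probability 1 - o(1).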
Complexity of Efficient Outcomes in Binary-Action Polymatrix Games with Implications for Coordination Problems
Authors: Argyrios Deligkas, Eduard Eiben, Gregory Gutin, Philip R. Neary, Anders Yeo
Subjects: Computer Science and Game Theory (cs.GT); Computational Complexity (cs.CC); Discrete Mathematics (cs.DM); Data Structures and Algorithms (cs.DS)
Abstract
We investigate the difficulty of finding economically efficient solutions to coordination problems on graphs. Our work focuses on two forms of coordination problem: pure-coordination games and anti-coordination games. We consider three objectives in the context of simple binary-action polymatrix games: (i) maximizing welfare, (ii) maximizing potential, and (iii) finding a welfare-maximizing Nash equilibrium. We introduce an intermediate, new graph-partition problem, termed Maximum Weighted Digraph Partition, which is of independent interest, and we provide a complexity dichotomy for it. Among other results, this dichotomy provides as a corollary a dichotomy for Objective (i) for general binary-action polymatrix games. In addition, it reveals that the complexity of achieving these objectives varies depending on the form of the coordination problem. Specifically, Objectives (i) and (ii) can be efficiently solved in pure-coordination games, but are NP-hard in anti-coordination games. Finally, we show that Objective (iii) is NP-hard even for simple non-trivial pure-coordination games.
Active Sensing for Two-Sided Beam Alignment and Reflection Design Using Ping-Pong Pilots
Authors: Tao Jiang, Foad Sohrabi, Wei Yu
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
Beam alignment is an important task for millimeter-wave (mmWave) communication, because constructing aligned narrow beams at both the transmitter (Tx) and the receiver (Rx) is crucial for compensating the significant path loss in very high-frequency bands. However, beam alignment is also a highly nontrivial task because large antenna arrays typically have a limited number of radio-frequency chains, allowing only low-dimensional measurements of the high-dimensional channel. This paper considers a two-sided beam alignment problem based on an alternating ping-pong pilot scheme between the Tx and the Rx over multiple rounds without explicit feedback. We propose a deep active sensing framework in which two long short-term memory (LSTM) based neural networks are employed to learn adaptive sensing strategies (i.e., measurement vectors) and to produce the final aligned beamformers at both sides. In the proposed ping-pong protocol, the Tx and the Rx alternately send pilots so that both sides can leverage local observations to sequentially design their respective sensing and data-transmission beamformers. The proposed strategy can be extended to scenarios with a reconfigurable intelligent surface (RIS), in which case the reflection coefficients at the RIS are also designed for both sensing and communications. Numerical experiments demonstrate significant and interpretable performance improvements. The proposed strategy works well even in challenging multipath channel environments.
Efficient Coded Multi-Party Computation at Edge Networks
Abstract
Multi-party computation (MPC) is promising for designing privacy-preserving machine learning algorithms at edge networks. An emerging approach is coded MPC (CMPC), which advocates the use of coded computation to improve the performance of MPC in terms of the required number of workers involved in computations. The current approach for designing CMPC algorithms is to merely combine efficient coded computation constructions with MPC. We show that this approach falls short of being efficient; e.g., entangled polynomial codes are not necessarily better than PolyDot codes in the MPC setting, while they are always better for coded computation. Motivated by this observation, we propose a new construction: Adaptive Gap Entangled (AGE) polynomial codes for MPC. We show through analysis and simulations that MPC with AGE codes always performs better than existing CMPC algorithms in terms of the required number of workers as well as computation, storage, and communication overhead.
Foundations of Spatial Perception for Robotics: Hierarchical Representations and Real-time Systems
Authors: Nathan Hughes, Yun Chang, Siyi Hu, Rajat Talak, Rumaisa Abdulhai, Jared Strader, Luca Carlone
Abstract
3D spatial perception is the problem of building and maintaining an actionable and persistent representation of the environment in real time using sensor data and prior knowledge. Despite the fast-paced progress in robot perception, most existing methods either build purely geometric maps (as in traditional SLAM) or flat metric-semantic maps that do not scale to large environments or large dictionaries of semantic labels. The first part of this paper is concerned with representations: we show that scalable representations for spatial perception need to be hierarchical in nature. Hierarchical representations are efficient to store, and lead to layered graphs with small treewidth, which enable provably efficient inference. We then introduce an example of a hierarchical representation for indoor environments, namely a 3D scene graph, and discuss its structure and properties. The second part of the paper focuses on algorithms to incrementally construct a 3D scene graph as the robot explores the environment. Our algorithms combine 3D geometry, topology (to cluster places into rooms), and geometric deep learning (e.g., to classify the types of rooms the robot is moving across). The third part of the paper focuses on algorithms to maintain and correct 3D scene graphs during long-term operation. We propose hierarchical descriptors for loop closure detection and describe how to correct a scene graph in response to loop closures by solving a 3D scene graph optimization problem. We conclude the paper by combining the proposed perception algorithms into Hydra, a real-time spatial perception system that builds a 3D scene graph from visual-inertial data. We showcase Hydra's performance in photo-realistic simulations and on real data collected by a Clearpath Jackal robot and a Unitree A1 robot. We release an open-source implementation of Hydra at https://github.com/MIT-SPARK/Hydra.
Exploring Zero and Few-shot Techniques for Intent Classification
Abstract
Conversational NLU providers often need to scale to thousands of intent-classification models, where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero- and few-shot intent classification approaches under this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using intent descriptions with large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using the T-Few recipe (Liu et al., 2022) on Flan-T5 (Chung et al., 2022) yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions
Local Life: Stay Informed Around You, A Scalable Geoparsing and Geotagging Approach to Serve Local News Worldwide
Abstract
Local news has become increasingly important in the news industry due to its various benefits. It offers local audiences information that helps them participate in their communities and interests. It also serves as a reliable source of factual reporting that can prevent misinformation. Moreover, it can influence national audiences, as some local stories may have wider implications for politics, the environment, or crime. Hence, detecting the exact geolocation and impact scope of local news is crucial for news recommendation systems. This process requires two fundamental steps: (1) classifying whether an article belongs to local news, and (2) identifying the geolocation of the article and its scope of influence to recommend it to appropriate users. In this paper, we focus on the second step and propose (1) an efficient approach to determine the location and radius of local news articles, (2) a method to reconcile the user's location with the article's location, and (3) a metric to evaluate the quality of the local news feed. We demonstrate that our technique is scalable and effective in serving hyperlocal news to users worldwide.
Towards Understanding and Improving GFlowNet Training
Authors: Max W. Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, Tommaso Biancalani
Abstract
Generative flow networks (GFlowNets) are a family of algorithms that learn a generative policy to sample discrete objects $x$ with non-negative reward $R(x)$. Learning objectives guarantee that a GFlowNet samples $x$ from the target distribution $p^*(x) \propto R(x)$ when the loss is globally minimized over all states or trajectories, but it is unclear how well they perform under practical limits on training resources. We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution. As flows can be underdetermined given training data, we clarify the importance of learned flows to generalization and to matching $p^*(x)$ in practice. We investigate how to learn better flows, and propose (i) prioritized replay training of high-reward $x$, (ii) a relative edge flow policy parametrization, and (iii) a novel guided trajectory balance objective, and show how it can solve a substructure credit assignment problem. We substantially improve sample efficiency on biochemical design tasks.
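The evaluation idea above, comparing the learned sampling distribution to the target $p^*(x) \propto R(x)$, can be sketched on a toy discrete space where both distributions are enumerable; the sample counts below are hypothetical, and the paper's strategy addresses the large-space case that this toy version sidesteps:

```python
import numpy as np
from collections import Counter

def target_distribution(rewards):
    # p*(x) proportional to R(x).
    r = np.asarray(rewards, dtype=float)
    return r / r.sum()

def empirical_distribution(samples, num_objects):
    counts = Counter(samples)
    return np.array([counts.get(i, 0) for i in range(num_objects)],
                    dtype=float) / len(samples)

def total_variation(p, q):
    return 0.5 * np.abs(p - q).sum()

rewards = [1.0, 3.0, 6.0]                  # R(x) over three objects
p_star = target_distribution(rewards)      # [0.1, 0.3, 0.6]
samples = [0] * 12 + [1] * 30 + [2] * 58   # hypothetical sampler output
gap = total_variation(p_star, empirical_distribution(samples, 3))
```

A well-trained sampler drives `gap` toward zero as the loss approaches its global minimum.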
Entropy-split multidimensional summation-by-parts discretization of the Euler and Navier-Stokes equations
Abstract
High-order Hadamard-form entropy stable multidimensional summation-by-parts discretizations of the Euler and Navier-Stokes equations are considerably more expensive than the standard divergence-form discretization. In search of a more efficient entropy stable scheme, we extend the entropy-split method for implementation on unstructured grids and investigate its properties. The main ingredients of the scheme are Harten's entropy functions, diagonal-$ \mathsf{E} $ summation-by-parts operators with diagonal norm matrix, and entropy conservative simultaneous approximation terms (SATs). We show that the scheme is high-order accurate and entropy conservative on periodic curvilinear unstructured grids for the Euler equations. An entropy stable matrix-type artificial dissipation operator is constructed, which can be added to the SATs to obtain an entropy stable semi-discretization. Fully-discrete entropy conservation is achieved using a relaxation Runge-Kutta method. Entropy stable viscous SATs, applicable to both the Hadamard-form and entropy-split schemes, are developed for the Navier-Stokes equations. In the absence of heat fluxes, the entropy-split scheme is entropy stable for the Navier-Stokes equations. Local conservation in the vicinity of discontinuities is enforced using an entropy stable hybrid scheme. Several numerical problems involving both smooth and discontinuous solutions are investigated to support the theoretical results. Computational cost comparison studies suggest that the entropy-split scheme offers substantial efficiency benefits relative to Hadamard-form multidimensional SBP-SAT discretizations.
Boosting Value Decomposition via Unit-Wise Attentive State Representation for Cooperative Multi-Agent Reinforcement Learning
Abstract
In cooperative multi-agent reinforcement learning (MARL), environmental stochasticity and uncertainty grow exponentially as the number of agents increases, which makes it hard to derive a compact latent representation from partial observations for boosting value decomposition. To tackle these issues, we propose a simple yet powerful method that alleviates partial observability and efficiently promotes coordination by introducing the UNit-wise attentive State Representation (UNSR). In UNSR, each agent learns a compact and disentangled unit-wise state representation output by transformer blocks, and produces its local action-value function. The proposed UNSR is used to boost value decomposition with a multi-head attention mechanism for producing efficient credit assignment in the mixing network, providing an efficient reasoning path between the individual value functions and the joint value function. Experimental results demonstrate that our method achieves superior performance and data efficiency compared to solid baselines on the StarCraft II micromanagement challenge. Additional ablation experiments also help identify the key factors contributing to the performance of UNSR.
Model Predictive Control of Smart Districts Participating in Frequency Regulation Market: A Case Study of Using Heating Network Storage
Authors: Hikaru Hoshino, T. John Koo, Yun-Chung Chu, Yoshihiko Susuki
Abstract
Flexibility provided by Combined Heat and Power (CHP) units in district heating networks is an important means to cope with the increasing penetration of intermittent renewable energy resources, and various methods have been proposed to exploit thermal storage tanks installed in these networks. This paper studies a novel problem motivated by an example of district heating and cooling networks in Japan, where high-temperature steam is used as the heating medium. In steam-based networks, storage tanks are usually absent, and there is a strong need to utilize the thermal inertia of the pipeline network as storage. However, using a heating network in this way directly affects the operating condition of the network, and assuring safety and supply quality on the use side is an open problem. To address this, we formulate a novel control problem for utilizing CHP units in the frequency regulation market while satisfying physical constraints on a steam network described by a nonlinear model capturing the dynamics of heat flows and heat accumulation in the network. Furthermore, a Model Predictive Control (MPC) framework is proposed to solve this problem. By consistently combining several nonlinear control techniques, a computationally efficient MPC controller is obtained and shown to work in real time.
Rethinking k-means from manifold learning perspective
Abstract
Although numerous clustering algorithms have been developed, many existing methods still leverage the k-means technique to detect clusters of data points. However, the performance of k-means heavily depends on the estimation of cluster centers, for which finding an optimal solution is very difficult. Another major drawback is that k-means is sensitive to noise and outliers. In this paper, we rethink k-means from a manifold learning perspective and present a new clustering algorithm that directly detects clusters of data without mean estimation. Specifically, we construct a distance matrix between data points using a Butterworth filter, such that the distance between any two data points in the same cluster equals a small constant, while the distance between data pairs from different clusters is increased. To exploit the complementary information embedded in different views, we leverage tensor Schatten p-norm regularization on the 3rd-order tensor consisting of the indicator matrices of the different views. Finally, an efficient alternating algorithm is derived to optimize our model. The constructed sequence is proved to converge to a stationary KKT point. Extensive experimental results indicate the superiority of our proposed method.
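One plausible reading of the Butterworth-filter construction above is to pass pairwise Euclidean distances through a Butterworth-style magnitude response, so that near (within-cluster) pairs are flattened to roughly one constant and far pairs to another. The exact filter form, cutoff, and order below are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def butterworth_affinity(X, cutoff, order=4):
    # Pairwise Euclidean distances passed through a Butterworth-style
    # magnitude response: pairs well inside the cutoff map to ~1, pairs
    # well beyond it map to ~0, with a sharp transition for high order.
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    return 1.0 / np.sqrt(1.0 + (d / cutoff) ** (2 * order))

X = np.array([[0.0], [0.1], [10.0], [10.1]])  # two tight clusters
A = butterworth_affinity(X, cutoff=2.0)
```

The flattening makes the affinity matrix close to block-constant, which is what lets clusters be read off without estimating means.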
Abstract
The evaluation of material networks is a relatively resource-intensive process in the rendering pipeline. Modern production scenes can contain hundreds or thousands of complex materials with massive networks, so there is great demand for an efficient way of handling material networks. In this paper, we introduce an efficient method for progressively caching material nodes without imposing overhead on rendering performance. We evaluate the material networks as usual during rendering; the output value of part of the network is then stored in a cache and can be reused when evaluating subsequent materials. Using our method, we can render scenes with performance equal to or better than that of the method without caching, with only a slight difference between the images rendered with and without caching.
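At its core, the progressive caching described above amounts to memoizing sub-network outputs so that materials sharing nodes reuse earlier evaluations. A minimal sketch with hypothetical node names (`MaterialNode`, `CachingEvaluator`), not the paper's renderer integration:

```python
class MaterialNode:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

class CachingEvaluator:
    # Memoizes node outputs by name, so a sub-network shared between
    # several materials is evaluated only once.
    def __init__(self):
        self.cache = {}
        self.evaluations = 0

    def evaluate(self, node):
        if node.name in self.cache:
            return self.cache[node.name]
        self.evaluations += 1
        value = node.fn(*(self.evaluate(i) for i in node.inputs))
        self.cache[node.name] = value
        return value

# Two materials sharing a "scaled noise" sub-network.
noise = MaterialNode("noise", lambda: 0.5)
scaled = MaterialNode("scaled", lambda v: v * 2.0, [noise])
mat_a = MaterialNode("mat_a", lambda v: v + 1.0, [scaled])
mat_b = MaterialNode("mat_b", lambda v: v - 1.0, [scaled])

ev = CachingEvaluator()
a, b = ev.evaluate(mat_a), ev.evaluate(mat_b)  # "scaled" computed once
```

Without the cache, the shared `noise` and `scaled` nodes would be re-evaluated for every material that uses them.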
Parameterized Verification of Disjunctive Timed Networks
Authors: Étienne André, Paul Eichler, Swen Jacobs, Shyam Lal Karra
Subjects: Logic in Computer Science (cs.LO); Formal Languages and Automata Theory (cs.FL)
Abstract
We introduce new techniques for the parameterized verification of disjunctive timed networks (DTNs), i.e., networks of timed automata (TAs) that communicate via location guards that enable a transition only if at least one process is in a given location. This computational model has been considered in the literature before, and example applications are gossiping clock synchronization protocols or planning problems. We address the minimum-time reachability problem (minreach) in DTNs, and show how to efficiently solve it based on a novel zone-graph algorithm. We further show that solving minreach allows us to construct a summary TA capturing exactly the possible behaviors of a single TA within a DTN of arbitrary size. The combination of these two results enables the parameterized verification of DTNs, while avoiding the construction of an exponential-size cutoff-system required by existing results. Our techniques are also implemented, and experiments show their practicality.
An Object SLAM Framework for Association, Mapping, and High-Level Tasks
Abstract
Object SLAM is considered increasingly significant for high-level robot perception and decision-making. Existing studies fall short in terms of data association, object representation, and semantic mapping, and frequently rely on additional assumptions that limit their performance. In this paper, we present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks. First, we propose an ensemble data association approach for associating objects under complicated conditions by incorporating parametric and nonparametric statistical testing. In addition, we suggest an outlier-robust centroid and scale estimation algorithm for modeling objects based on iForest and line alignment. A lightweight, object-oriented map is then represented by the estimated general object models. Taking into consideration the semantic invariance of objects, we convert the object map to a topological map that provides semantic descriptors to enable multi-map matching. Finally, we suggest an object-driven active exploration strategy to achieve autonomous mapping in a grasping scenario. The proposed object SLAM framework has been evaluated on a range of public datasets and in real-world experiments covering mapping, augmented reality, scene matching, relocalization, and robotic manipulation, demonstrating its efficient performance.
Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions
Authors: Marco Valentino, Danilo S. Carvalho, André Freitas
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Abstract
Neural-based word embeddings using solely distributional information have consistently produced useful meaning representations for downstream tasks. However, existing approaches often result in representations that are hard to interpret and control. Natural language definitions, on the other hand, possess a recursive, self-explanatory semantic structure that can support novel representation learning paradigms able to preserve explicit conceptual relations and constraints in the vector space. This paper proposes a neuro-symbolic, multi-relational framework to learn word embeddings exclusively from natural language definitions by jointly mapping defined and defining terms along with their corresponding semantic relations. By automatically extracting the relations from definition corpora and formalising the learning problem via a translational objective, we specialise the framework in hyperbolic space to capture the hierarchical and multi-resolution structure induced by the definitions. An extensive empirical analysis demonstrates that the framework can help impose the desired structural constraints while preserving the mapping required for controllable and interpretable semantic navigation. Moreover, the experiments reveal the superiority of the hyperbolic word embeddings over their Euclidean counterparts and demonstrate that the multi-relational framework can obtain competitive results when compared to state-of-the-art neural approaches (including Transformers), with the advantage of being significantly more efficient and intrinsically interpretable.
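Hyperbolic embeddings of the kind referred to above typically live in the Poincaré ball model, whose distance function is what makes hierarchies cheap to represent. A minimal sketch of that distance (the paper's exact model and training objective may differ):

```python
import numpy as np

def poincare_distance(u, v):
    # Geodesic distance between two points inside the unit (Poincare) ball:
    # d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))).
    delta = np.dot(u - v, u - v)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))
    return np.arccosh(1.0 + 2.0 * delta / denom)
```

The distance from the origin to a point at radius r is ln((1+r)/(1-r)), so volume (and hence embedding capacity) grows exponentially toward the boundary, which suits tree-like structures such as definitional hierarchies.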
Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation
Abstract
Neural architecture search (NAS) has emerged as a successful technique for finding robust deep neural network (DNN) architectures. However, most existing robustness evaluations in NAS only consider $l_{\infty}$ norm-based adversarial noise. To improve the robustness of DNN models against multiple types of noise, it is necessary to consider a comprehensive evaluation in NAS for robust architectures. But with an increasing number of robustness evaluations, it also becomes more time-consuming to find comprehensively robust architectures. To alleviate this problem, we propose a novel method for the efficient search of comprehensively robust neural architectures via multi-fidelity evaluation (ES-CRNA-ME). Specifically, we first search for comprehensively robust architectures under multiple types of evaluations using a weight-sharing-based NAS method, including different $l_{p}$ norm attacks, semantic adversarial attacks, and composite adversarial attacks. In addition, we reduce the number of robustness evaluations via correlation analysis, which can group similar evaluations and decrease the evaluation cost. Finally, we propose a multi-fidelity online surrogate during optimization to further decrease the search cost: on the basis of a surrogate constructed from low-fidelity data, online high-fidelity data is utilized to fine-tune the surrogate. Experiments on the CIFAR10 and CIFAR100 datasets show the effectiveness of our proposed method.
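The correlation-based reduction of robustness evaluations described above can be sketched as a greedy filter over an architectures-by-evaluations score matrix: keep an evaluation only if it is not strongly correlated with one already kept. The threshold and data below are illustrative, not the paper's:

```python
import numpy as np

def reduce_evaluations(scores, threshold=0.95):
    # scores[i, j]: robustness score of architecture i under evaluation j.
    # Greedily keep an evaluation only if it is not strongly correlated
    # (|r| >= threshold) with any evaluation already kept.
    corr = np.corrcoef(scores, rowvar=False)
    kept = []
    for j in range(scores.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return kept

# Column 1 duplicates column 0 (perfectly correlated); column 2 does not.
scores = np.array([[1, 2, 5], [2, 4, 1], [3, 6, 4], [4, 8, 2]], dtype=float)
kept = reduce_evaluations(scores)
```

Here the redundant second evaluation is dropped, so only two of the three evaluations need to be run on new candidate architectures.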
Adaptive and Flexible Model-Based AI for Deep Receivers in Dynamic Channels
Authors: Tomer Raviv, Sangwoo Park, Osvaldo Simeone, Yonina C. Eldar, Nir Shlezinger
Abstract
Artificial intelligence (AI) is envisioned to play a key role in future wireless technologies, with deep neural networks (DNNs) enabling digital receivers to learn to operate in challenging communication scenarios. However, wireless receiver design poses unique challenges that fundamentally differ from those encountered in traditional deep learning domains. The main challenges arise from the limited power and computational resources of wireless devices, as well as from the dynamic nature of wireless communications, which causes continual changes to the data distribution. These challenges impair conventional AI based on highly parameterized DNNs, motivating the development of adaptive, flexible, and lightweight AI for wireless communications, which is the focus of this article. Here, we propose that AI-based design of wireless receivers requires rethinking the three main pillars of AI: architecture, data, and training algorithms. In terms of architecture, we review how to design compact DNNs via model-based deep learning. Then, we discuss how to acquire training data for deep receivers without compromising spectral efficiency. Finally, we review efficient, reliable, and robust training algorithms via meta-learning and generalized Bayesian learning. Numerical results are presented to demonstrate the complementary effectiveness of each of the surveyed methods. We conclude by presenting opportunities for future research on the development of practical deep receivers.
Multi-Wavelength Transponders for High-capacity Optical Networks: A Physical-layer-aware Network Planning Study
Authors: Jasper Müller, Ognjen Jovanovic, Tobias Fehenberger, Gabriele Di Rosa, Jörg-Peter Elbers, Carmen Mas-Machuca
Subjects: Networking and Internet Architecture (cs.NI)
Abstract
Continued cost- and power-efficient capacity scaling in optical networks is imperative to keep pace with ever-increasing traffic demands. In this paper, we investigate multi-wavelength transponders as a potential way forward. Suitable system architectures and realistic specifications of multi-wavelength transponders are identified and analyzed in terms of transmit OSNR penalties and spectral constraints. We investigate the performance of different specifications compared to single-wavelength transponders in a network planning study on two network topologies, developing guidelines for multi-wavelength transponder specifications and their potential benefits. The studies show a reduction in the number of required lasers of up to 83% at the expense of a slight increase in the number of lightpaths, demonstrating the potential for significant cost savings and efficiency improvements.
Methods and Tools to Advance the Retrieval of Mathematical Knowledge from Digital Libraries for Search-, Recommendation-, and Assistance-Systems
Authors: Bela Gipp, André Greiner-Petter, Moritz Schubotz, Norman Meuschke
Abstract
This project investigated new approaches and technologies to enhance the accessibility of mathematical content and its semantic information for a broad range of information retrieval applications. To achieve this goal, the project addressed three main research challenges: (1) syntactic analysis of mathematical expressions, (2) semantic enrichment of mathematical expressions, and (3) evaluation using quality metrics and demonstrators. To make our research useful for the research community, we published tools that enable researchers to process mathematical expressions more effectively and efficiently.
Do RESTful API Design Rules Have an Impact on the Understandability of Web APIs? A Web-Based Experiment with API Descriptions
Authors: Justus Bogner, Sebastian Kotstein, Timo Pfaff
Abstract
Context: Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking. Objective: We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) if rule violations are also perceived as more difficult to understand, and 3) if demographic attributes like REST-related experience have an influence on this. Method: We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a crossover design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a "rule" and one that was a "violation" of this rule. Participants answered comprehension questions and rated the perceived difficulty. Results: For 11 of the 12 rules, we found that "violation" performed significantly worse than "rule" for the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for "violation". Conclusions: Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
Towards Versatile and Efficient Visual Knowledge Injection into Pre-trained Language Models with Cross-Modal Adapters
Authors: Xinyun Zhang, Haochen Tan, Han Wu, Mingjie Zhan, Ding Liang, Bei Yu
Abstract
Humans learn language via multi-modal knowledge. However, due to the text-only pre-training scheme, most existing pre-trained language models (PLMs) cannot benefit from multi-modal information. To inject visual knowledge into PLMs, existing methods incorporate either the text or image encoder of vision-language models (VLMs) to encode the visual information and update all the original parameters of PLMs for knowledge fusion. In this paper, we propose a new plug-and-play module, X-adapter, to flexibly leverage the aligned visual and textual knowledge learned in pre-trained VLMs and efficiently inject it into PLMs. Specifically, we insert X-adapters into PLMs, and only the added parameters are updated during adaptation. To fully exploit the potential of VLMs, X-adapters consist of two sub-modules, V-expert and T-expert, that fuse VLMs' image and text representations, respectively. We can opt to activate different sub-modules depending on the downstream task. Experimental results show that our method can significantly improve the performance on object-color reasoning and natural language understanding (NLU) tasks compared with PLM baselines.
Optimized Schwarz methods for the time-dependent Stokes-Darcy coupling
Abstract
This paper derives optimal coefficients for optimized Schwarz iterations for the time-dependent Stokes-Darcy problem using an innovative strategy to solve a nonstandard min-max problem. The coefficients take into account both physical and discretization parameters that characterize the coupled problem, and they guarantee the robustness of the associated domain decomposition method. Numerical results validate the proposed approach in several test cases with physically relevant parameters.
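The nonstandard min-max problem mentioned above follows the generic pattern of optimized Schwarz methods (stated here in its textbook form, not with the paper's specific Stokes-Darcy operators): the free parameter $p$ in the transmission conditions is chosen to minimize the worst-case convergence factor over the relevant frequency range,

```latex
p^{\star} \;=\; \arg\min_{p} \; \max_{k_{\min} \le k \le k_{\max}} \bigl| \rho(k, p) \bigr| ,
```

where $\rho(k,p)$ is the convergence factor of the Schwarz iteration at frequency $k$; the bounds $k_{\min}$ and $k_{\max}$ are typically set by the domain size and the mesh, which is how discretization parameters enter the optimized coefficients.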
Reliability Analysis of Gracefully Degrading Automotive Systems
Authors: Philipp Weiss, Ali Younessi, Sebastian Steinhorst
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Fail-operational systems are a prerequisite for autonomous driving. Without a driver who can act as a fallback solution in a critical failure scenario, the system has to be able to mitigate failures on its own and keep critical applications operational. To reduce redundancy cost, graceful degradation can be applied by repurposing hardware resources at run-time. Critical applications can be kept operational by starting passive backups and shutting down non-critical applications instead to make sufficient resources available. In order to design such systems efficiently, the degradation effects on reliability and cost savings have to be analyzed. In this paper we present our approach to formally analyze the impact of graceful degradation on the reliability of critical and non-critical applications. We then quantify the effect of graceful degradation on the reliability of both critical and non-critical applications in distributed automotive systems and compare the achieved cost reduction with conventional redundancy approaches. In our experiments, redundancy overhead could be reduced by 80% compared to active redundancy in a scenario with a balanced mix of critical and non-critical applications using our graceful degradation approach. Overall, we present a detailed reliability and cost analysis of graceful degradation in distributed automotive systems. Our findings confirm that graceful degradation can substantially reduce cost compared to conventional redundancy approaches, with no negative impact on the reliability of critical applications, provided that a reliability reduction of non-critical applications can be accepted. Our results show that a trade-off has to be made between the impact of the degradation on the reliability of non-critical applications and the achievable cost reduction.
Design and Development of a Java Parallel I/O Library
Authors: Muhammad Sohaib Ayub, Muhammad Adnan, Muhammad Yasir Shafi
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Parallel I/O refers to the ability of scientific programs to concurrently read/write from/to a single file from multiple processes executing on distributed memory platforms like compute clusters. In the HPC world, I/O becomes a significant bottleneck for many real-world scientific applications. In the last two decades, there has been significant research in improving the performance of I/O operations in scientific computing for traditional languages including C, C++, and Fortran. As a result, several mature and high-performance libraries including ROMIO (an implementation of MPI-IO), parallel HDF5, Parallel I/O (PIO), and parallel netCDF are available today that provide efficient I/O for scientific applications. However, very little research has been done to evaluate and improve the I/O performance of Java-based HPC applications. The main hindrance in the development of efficient parallel I/O Java libraries is the lack of a standard API (something equivalent to MPI-IO). Some ad-hoc solutions have been developed and used in proprietary applications, but there is no general-purpose solution that can be used by performance-hungry applications. As part of this project, we plan to develop a Java-based parallel I/O API inspired by the MPI-IO bindings (MPI 2.0 standard document) for C, C++, and Fortran. Once the Java equivalent of the MPI-IO API has been developed, we will develop a reference implementation on top of existing Java messaging libraries. Later, we will evaluate and compare the performance of our reference Java Parallel I/O library with its C/C++ counterparts using benchmarks and real-world applications.
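As a language-neutral illustration of the bookkeeping an MPI-IO-style API performs (sketched in Python for brevity; the function name is ours, not the proposed API's), each process writes its block of a shared file at a byte offset determined by the element counts of all preceding ranks:

```python
def file_offsets(n_ranks, counts, itemsize=8):
    # Byte offset at which each rank writes its block of a shared file,
    # MPI-IO style: rank i starts right after all preceding ranks' data.
    offsets, running = [], 0
    for rank in range(n_ranks):
        offsets.append(running * itemsize)
        running += counts[rank]
    return offsets
```

With per-rank offsets like these, all processes can issue writes concurrently without coordinating further, which is the core idea behind contiguous collective I/O.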
Knowledge Soft Integration for Multimodal Recommendation
Authors: Kai Ouyang, Chen Tang, Wenhao Zheng, Xiangjin Xie, Xuanji Xiao, Jian Dong, Hai-Tao Zheng, Zhi Wang
Subjects: Information Retrieval (cs.IR); Multimedia (cs.MM)
Abstract
One of the main challenges in modern recommendation systems is how to effectively utilize multimodal content to achieve more personalized recommendations. Despite various proposed solutions, most of them overlook the mismatch between the knowledge gained from independent feature extraction processes and downstream recommendation tasks. Specifically, multimodal feature extraction processes do not incorporate prior knowledge relevant to recommendation tasks, while recommendation tasks often directly use these multimodal features as side information. This mismatch can lead to model fitting biases and performance degradation, which this paper refers to as the \textit{curse of knowledge} problem. To address this issue, we propose using knowledge soft integration to balance the utilization of multimodal features against the curse of knowledge problem they bring about. To achieve this, we put forward a Knowledge Soft Integration framework for multimodal recommendation, abbreviated as KSI, which is composed of the Structure Efficiently Injection (SEI) module and the Semantic Soft Integration (SSI) module. In the SEI module, we model the modality correlation between items using a Refined Graph Neural Network (RGNN), and introduce a regularization term to reduce the redundancy of user/item representations. In the SSI module, we design a self-supervised retrieval task to further indirectly integrate the semantic knowledge of multimodal features, and enhance the semantic discrimination of item representations. Extensive experiments on three benchmark datasets demonstrate the superiority of KSI and validate the effectiveness of its two modules.
Dimension results for extremal-generic polynomial systems over complete toric varieties
Abstract
We study polynomial systems with prescribed monomial supports in the Cox rings of toric varieties built from complete polyhedral fans. We present combinatorial formulas for the dimensions of their associated subvarieties under genericity assumptions on the coefficients of the polynomials. Using these formulas, we identify at which degrees generic systems in polytopal algebras form regular sequences. Our motivation comes from sparse elimination theory, where knowing the expected dimension of these subvarieties leads to specialized algorithms and to large speed-ups for solving sparse polynomial systems. As a special case, we classify the degrees at which regular sequences defined by weighted homogeneous polynomials can be found, answering an open question in the Gr\"obner bases literature. We also show that deciding whether a sparse system is generically a regular sequence in a polytopal algebra is hard from the point of view of theoretical computational complexity.
Optimizing Memory Mapping Using Deep Reinforcement Learning
Authors: Pengming Wang, Mikita Sazanovich, Berkin Ilbeyi, Phitchaya Mangpo Phothilimthana, Manish Purohit, Han Yang Tay, Ngân Vũ, Miaosen Wang, Cosmin Paduraru, Edouard Leurent, Anton Zhernov, Julian Schrittwieser, Thomas Hubert, Robert Tung, Paula Kurylowicz, Kieran Milan, Oriol Vinyals, Daniel J. Mankowitz
Abstract
Resource scheduling and allocation is a critical component of many high impact systems ranging from congestion control to cloud computing. Finding more optimal solutions to these problems often has significant impact on resource and time savings, reducing device wear-and-tear, and even potentially reducing carbon emissions. In this paper, we focus on a specific instance of a scheduling problem, namely the memory mapping problem that occurs during compilation of machine learning programs: that is, mapping tensors to different memory layers to optimize execution time. We introduce an approach for solving the memory mapping problem using Reinforcement Learning. RL is a solution paradigm well-suited for sequential decision making problems that are amenable to planning, and for combinatorial search spaces with high-dimensional data inputs. We formulate the problem as a single-player game, which we call the mallocGame, such that high-reward trajectories of the game correspond to efficient memory mappings on the target hardware. We also introduce a Reinforcement Learning agent, mallocMuZero, and show that it is capable of playing this game to discover new and improved memory mapping solutions that lead to faster execution times on real ML workloads on ML accelerators. We compare the performance of mallocMuZero to the default solver used by the Accelerated Linear Algebra (XLA) compiler on a benchmark of realistic ML workloads. In addition, we show that mallocMuZero is capable of improving the execution time of the recently published AlphaTensor matrix multiplication model.
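The single-player-game framing can be sketched with a deliberately toy cost model (the names, costs, and policy interface below are illustrative, not the paper's mallocGame): each move places one tensor in fast or slow memory, and the episode reward is the negative estimated access cost, so high-reward trajectories correspond to good mappings.

```python
def rollout(tensor_sizes, policy, fast_capacity, fast_cost=1.0, slow_cost=10.0):
    # One episode of a toy "memory mapping game": each move assigns a
    # tensor to fast or slow memory; reward = -(total access cost).
    used, cost = 0, 0.0
    for size in tensor_sizes:
        if policy(size, fast_capacity - used) and used + size <= fast_capacity:
            used += size
            cost += size * fast_cost
        else:
            cost += size * slow_cost
    return -cost
```

An RL agent would learn the `policy` by maximizing this reward over many rollouts; a greedy baseline is simply `lambda size, free: size <= free`.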
Automata with Timers
Authors: Véronique Bruyère, Guillermo A. Pérez, Gaëtan Staquet, Frits W. Vaandrager
Subjects: Formal Languages and Automata Theory (cs.FL)
Abstract
In this work, we study properties of deterministic finite-state automata with timers, a subclass of timed automata proposed by Vaandrager et al. as a candidate for an efficiently learnable timed model. We first study the complexity of the configuration reachability problem for such automata and establish that it is PSPACE-complete. Then, as simultaneous timeouts (which we call races) can occur in timed runs of such automata, we study the problem of determining whether it is possible to modify the delays between the actions in a run so as to avoid such races. The absence of races is important for modelling purposes and to streamline learning of automata with timers. We provide an effective characterization of when an automaton is race-avoiding and establish that the related decision problem is in 3EXP and PSPACE-hard.
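A race, as defined above, is simply two timeouts firing at the same instant. A minimal sketch (our own simplification of timed runs, with timers given as start times and durations rather than full automaton configurations):

```python
from collections import Counter

def find_races(timers):
    # timers: list of (start_time, duration) pairs. A "race" occurs when
    # two or more timeouts expire at exactly the same instant.
    expiries = Counter(start + duration for start, duration in timers)
    return sorted(t for t, hits in expiries.items() if hits >= 2)
```

Race avoidance then asks whether the delays between actions can be perturbed so that no such coincidence remains, which is what the paper characterizes for actual automata with timers.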
Distributed Twins in Edge Computing: Blockchain and IOTA
Authors: Anwar Sadad, Muazzam A. Khan, Baraq Ghaleb, Fadia Ali Khan, Maha Driss, Wadii Boulila, Jawad Ahmad
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Blockchain (BC) and Information for Operational and Tactical Analysis (IOTA) are distributed ledgers that record a huge number of transactions in multiple places at the same time using decentralized databases. Both BC and IOTA facilitate the Internet-of-Things (IoT) by overcoming the issues related to traditional centralized systems, such as privacy, security, resource cost, performance, and transparency. Still, IoT faces the potential challenges of real-time processing, resource management, and storage services. Edge computing (EC) has been introduced to tackle the underlying challenges of IoT by providing real-time processing, resource management, and storage services nearer to IoT devices on the network's edge. To make EC more efficient and effective, solutions using BC and IOTA have been proposed in this area. However, BC and IOTA come with their own pitfalls. This survey outlines the pitfalls of BC and IOTA in EC and provides research directions to be investigated further.
Accelerating Statewide Connected Vehicles Big (Sensor Fusion) Data ETL Pipelines on GPUs
Authors: Abdul Rashid Mussah, Maged Shoman, Mark Amo-Boateng, Yaw Adu-Gyamfi
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Real-time traffic and sensor data from connected vehicles have the potential to provide insights that will lead to the immediate benefit of efficient management of the transportation infrastructure and related adjacent services. However, the growth of electric vehicles (EVs) and connected vehicles (CVs) has generated an abundance of CV data and sensor data that has put a strain on the processing capabilities of existing data center infrastructure. As a result, the benefits are either delayed or not fully realized. To address this issue, we propose a solution for processing state-wide CV traffic and sensor data on GPUs that provides real-time micro-scale insights in both temporal and spatial dimensions. This is achieved through the use of the Nvidia Rapids framework and the Dask parallel cluster in Python. Our findings demonstrate a 70x acceleration in the extraction, transformation, and loading (ETL) of CV data for the State of Missouri for a full day of all unique CV journeys, reducing the processing time from approximately 48 hours to just 25 minutes. Given that these results cover thousands of CVs and several thousand individual journeys with sub-second sensor data, we can model and obtain actionable insights for the management of the transportation infrastructure.
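The underlying pattern is a partitioned transform: split the data, process partitions in parallel, recombine. A minimal sketch with stdlib threads (the paper uses NVIDIA RAPIDS and Dask on GPUs; this only shows the shape of the pipeline, not their implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def partitioned_etl(records, transform, n_partitions=4):
    # Dask-style pattern: split records into partitions, transform each
    # partition in parallel, then concatenate results in order.
    size = max(1, -(-len(records) // n_partitions))  # ceil(len / n)
    parts = [records[i:i + size] for i in range(0, len(records), size)]
    with ThreadPoolExecutor() as pool:
        transformed = pool.map(lambda part: [transform(r) for r in part], parts)
        return [row for part in transformed for row in part]
```

In a Dask/cuDF deployment, each partition would be a GPU dataframe and `transform` a vectorized kernel rather than a per-record Python function.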
Abstract
Adapters have been positioned as a parameter-efficient fine-tuning (PEFT) approach, whereby a minimal number of parameters are added to the model and fine-tuned. However, adapters have not been sufficiently analyzed to understand if PEFT translates to benefits in training/deployment efficiency and maintainability/extensibility. Through extensive experiments on many adapters, tasks, and languages in supervised and cross-lingual zero-shot settings, we clearly show that for Natural Language Understanding (NLU) tasks, the parameter efficiency in adapters does not translate to efficiency gains compared to full fine-tuning of models. More precisely, adapters are relatively expensive to train and have slightly higher deployment latency. Furthermore, the maintainability/extensibility benefits of adapters can be achieved with simpler approaches like multi-task training via full fine-tuning, which also provide relatively faster training times. We, therefore, recommend that for moderately sized models for NLU tasks, practitioners should rely on full fine-tuning or multi-task training rather than using adapters. Our code is available at https://github.com/AI4Bharat/adapter-efficiency.
Dynamically Conservative Self-Driving Planner for Long-Tail Cases
Authors: Weitao Zhou, Zhong Cao, Nanshan Deng, Xiaoyu Liu, Kun Jiang, Diange Yang
Abstract
Self-driving vehicles (SDVs) are becoming reality but still suffer from "long-tail" challenges during natural driving: the SDVs will continually encounter rare, safety-critical cases that may not be included in the dataset they were trained on. Some safety-assurance planners solve this problem by being conservative in all possible cases, which may significantly affect driving mobility. To this end, this work proposes a method to automatically adjust the conservative level according to each case's "long-tail" rate, named the dynamically conservative planner (DCP). We first define the "long-tail" rate as an SDV's confidence to pass a driving case. The rate indicates the probability of safety-critical events and is estimated using a statistical bootstrap method with historical data. Then, a reinforcement learning-based planner is designed to contain candidate policies with different conservative levels. The final policy is optimized based on the estimated "long-tail" rate. In this way, the DCP is designed to automatically adjust to be more conservative in low-confidence "long-tail" cases while remaining efficient otherwise. The DCP is evaluated in the CARLA simulator using driving cases with "long-tail" distributed training data. The results show that the DCP can accurately estimate the "long-tail" rate to identify potential risks. Based on the rate, the DCP automatically avoids potential collisions in "long-tail" cases using conservative decisions while not affecting the average velocity in other typical cases. Thus, the DCP is safer and more efficient than baselines with fixed conservative levels, e.g., an always-conservative planner. This work provides a technique to guarantee SDV performance in unexpected driving cases without resorting to a globally conservative setting, which contributes to solving the "long-tail" problem practically.
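The bootstrap confidence estimate can be sketched generically (a plain bootstrap over a case's historical pass/fail record; the function name and percentile choice are ours, not the paper's exact estimator):

```python
import random

def bootstrap_pass_rate(outcomes, n_resamples=1000, seed=0):
    # outcomes: historical 1/0 pass/fail record for a driving case.
    # Resample with replacement, then report (mean rate, 5th-percentile
    # rate); a low percentile flags a low-confidence "long-tail" case.
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(outcomes) for _ in outcomes) / len(outcomes)
        for _ in range(n_resamples)
    )
    return sum(means) / len(means), means[int(0.05 * n_resamples)]
```

A planner along the lines of the DCP would pick a more conservative candidate policy whenever the lower bound falls below some threshold.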
AGFormer: Efficient Graph Representation with Anchor-Graph Transformer
Authors: Bo Jiang, Fei Xu, Ziyan Zhang, Jin Tang, Feiping Nie
Abstract
To alleviate the local receptive field issue of GCNs, Transformers have been exploited to capture the long-range dependencies of nodes for graph data representation and learning. However, existing graph Transformers generally employ a regular self-attention module for all node-to-node message passing, which needs to learn the affinities/relationships between all node pairs, leading to high computational cost. Also, they are usually sensitive to graph noise. To overcome these issues, we propose a novel graph Transformer architecture, termed Anchor Graph Transformer (AGFormer), that leverages an anchor graph model. To be specific, AGFormer first obtains some representative anchors and then converts node-to-node message passing into an anchor-to-anchor and anchor-to-node message passing process. Thus, AGFormer performs much more efficiently and robustly than regular node-to-node Transformers. Extensive experiments on several benchmark datasets demonstrate the effectiveness and benefits of the proposed AGFormer.
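The anchor-based message passing can be sketched as follows (a toy version in which softmax-of-distance weights stand in for learned attention; this is not the paper's exact module): node features are first aggregated into each anchor, then each node is updated from the anchors, costing O(n·m) messages instead of the O(n²) of full self-attention.

```python
import math

def anchor_message_passing(X, anchors):
    # X: list of node feature vectors; anchors: list of anchor positions.
    def weights(src, dsts):
        # softmax over negative squared distances (toy attention scores)
        scores = [-sum((a - b) ** 2 for a, b in zip(src, d)) for d in dsts]
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        tot = sum(exps)
        return [e / tot for e in exps]
    # node -> anchor: aggregate node features into each anchor
    A = []
    for anchor in anchors:
        w = weights(anchor, X)
        A.append([sum(wi * x[k] for wi, x in zip(w, X)) for k in range(len(X[0]))])
    # anchor -> node: update each node from the anchor features
    out = []
    for x in X:
        w = weights(x, A)
        out.append([sum(wi * a[k] for wi, a in zip(w, A)) for k in range(len(A[0]))])
    return out
```

With m anchors and n nodes, both loops touch n·m pairs, which is the source of the efficiency claim when m ≪ n.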
PillarAcc: Sparse PointPillars Accelerator for Real-Time Point Cloud 3D Object Detection on Edge Devices
Authors: Minjae Lee, Hyungmin Kim, Seongmin Park, Minyong Yoon, Janghwan Lee, Junwon Choi, Mingu Kang, Jungwook Choi
Abstract
3D object detection using point cloud (PC) data is vital for autonomous driving perception pipelines, where efficient encoding is key to meeting stringent resource and latency requirements. PointPillars, a widely adopted bird's-eye view (BEV) encoding, aggregates 3D point cloud data into 2D pillars for high-accuracy 3D object detection. However, most state-of-the-art methods employing PointPillars overlook the inherent sparsity of pillar encoding, missing opportunities for significant computational reduction. In this study, we propose a groundbreaking algorithm-hardware co-design that accelerates sparse convolution processing and maximizes sparsity utilization in pillar-based 3D object detection networks. We investigate sparsification opportunities using an advanced pillar-pruning method, achieving an optimal balance between accuracy and sparsity. We introduce PillarAcc, a state-of-the-art sparsity support mechanism that enhances sparse pillar convolution through linear complexity input-output mapping generation and conflict-free gather-scatter memory access. Additionally, we propose dataflow optimization techniques, dynamically adjusting the pillar processing schedule for optimal hardware utilization under diverse sparsity operations. We evaluate PillarAcc on various cutting-edge 3D object detection networks and benchmarks, achieving remarkable speedup and energy savings compared to representative edge platforms, demonstrating record-breaking PointPillars speed of 500 FPS with minimal compromise in accuracy.
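The sparsity PillarAcc exploits comes from pillar encoding itself: only occupied grid cells need compute. A minimal sketch of the coordinate-to-slot mapping (our own simplification; the paper's hardware generates input-output mappings for sparse convolution, which this does not attempt):

```python
def build_pillar_index(points, cell=1.0):
    # Hash only non-empty pillars: map each grid coordinate to a compact
    # slot index, so downstream compute scales with occupied pillars
    # rather than with the full dense grid.
    index, pillars = {}, []
    for x, y in points:
        key = (int(x // cell), int(y // cell))
        if key not in index:
            index[key] = len(pillars)
            pillars.append([])
        pillars[index[key]].append((x, y))
    return index, pillars
```

A dense BEV grid would allocate a slot per cell regardless of occupancy; here the number of slots equals the number of non-empty pillars, which is the quantity sparse accelerators aim to scale with.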
Understanding Automatic Differentiation Pitfalls
Authors: Jan Hückelheim, Harshitha Menon, William Moses, Bruce Christianson, Paul Hovland, Laurent Hascoët
Abstract
Automatic differentiation, also known as backpropagation, AD, autodiff, or algorithmic differentiation, is a popular technique for computing derivatives of computer programs accurately and efficiently. Sometimes, however, the derivatives computed by AD could be interpreted as incorrect. These pitfalls occur systematically across tools and approaches. In this paper we broadly categorize problematic usages of AD and illustrate each category with examples such as chaos, time-averaged oscillations, discretizations, fixed-point loops, lookup tables, and linear solvers. We also review debugging techniques and their effectiveness in these situations. With this article we hope to help readers avoid unexpected behavior, detect problems more easily when they occur, and have more realistic expectations from AD tools.
Supplementing Gradient-Based Reinforcement Learning with Simple Evolutionary Ideas
Authors: Harshad Khadilkar
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI)
Abstract
We present a simple, sample-efficient algorithm for introducing large but directed learning steps in reinforcement learning (RL), through the use of evolutionary operators. The methodology uses a population of RL agents training with a common experience buffer, with occasional crossovers and mutations of the agents in order to search efficiently through the policy space. Unlike prior literature on combining evolutionary search (ES) with RL, this work does not generate a distribution of agents from a common mean and covariance matrix. Neither does it require the evaluation of the entire population of policies at every time step. Instead, we focus on gradient-based training throughout the life of every policy (individual), with a sparse amount of evolutionary exploration. The resulting algorithm is shown to be robust to hyperparameter variations. As a surprising corollary, we show that simply initialising and training multiple RL agents with a common memory (with no further evolutionary updates) outperforms several standard RL baselines.
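The evolutionary operators involved are standard; a minimal sketch of uniform crossover and Gaussian mutation on policy parameter vectors (illustrative forms, not necessarily the paper's exact operators):

```python
import random

def crossover(parent_a, parent_b, rng):
    # uniform crossover: each parameter taken from either parent
    return [a if rng.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def mutate(params, rng, sigma=0.1):
    # additive Gaussian perturbation of a policy parameter vector
    return [p + rng.gauss(0.0, sigma) for p in params]
```

In the described scheme these operators are applied only occasionally, as large directed jumps in policy space, while gradient-based RL with a shared experience buffer does the bulk of the training.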
Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training
Abstract
Training generative adversarial networks (GANs) stably is a challenging task. The generator in a GAN transforms noise vectors, typically Gaussian distributed, into realistic data such as images. In this paper, we propose a novel approach for training GANs with images as inputs, but without enforcing any pairwise constraints. The intuition is that images are more structured than noise, which the generator can leverage to learn a more robust transformation. The process can be made efficient by identifying closely related datasets, or a ``friendly neighborhood'' of the target distribution, inspiring the moniker Spider GAN. To define friendly neighborhoods leveraging proximity between datasets, we propose a new measure called the signed inception distance (SID), inspired by the polyharmonic kernel. We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets, for instance, between Tiny-ImageNet and CelebA faces. Further, we demonstrate cascading Spider GAN, where the output distribution from a pre-trained GAN generator is used as the input to the subsequent network. Effectively, this transports one distribution to another in a cascaded fashion until the target is learnt, a new flavor of transfer learning. We demonstrate the efficacy of the Spider approach on DCGAN, conditional GAN, PGGAN, StyleGAN2 and StyleGAN3. The proposed approach achieves state-of-the-art Frechet inception distance (FID) values, with one-fifth of the training iterations, in comparison to the baseline counterparts on high-resolution small datasets such as MetFaces, Ukiyo-E Faces and AFHQ-Cats.
Scalable Coupling of Deep Learning with Logical Reasoning
Authors: Marianne Defresne, Sophie Barbe, Thomas Schiex
Abstract
In the ongoing quest for hybridizing discrete reasoning with neural nets, there is an increasing interest in neural architectures that can learn how to solve discrete reasoning or optimization problems from natural inputs. In this paper, we introduce a scalable neural architecture and loss function dedicated to learning the constraints and criteria of NP-hard reasoning problems expressed as discrete Graphical Models. Our loss function solves one of the main limitations of Besag's pseudo-loglikelihood, enabling learning of high energies. We empirically show that it is able to efficiently learn how to solve NP-hard reasoning problems from natural inputs, such as the symbolic, visual, or many-solutions variants of the Sudoku problem, as well as the energy optimization formulation of the protein design problem, providing data efficiency, interpretability, and \textit{a posteriori} control over predictions.
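For reference, Besag's pseudo-loglikelihood, whose limitation the proposed loss addresses, replaces the intractable joint likelihood of a graphical model with a sum of conditional terms:

```latex
\ell_{\mathrm{PL}}(\theta \mid x) \;=\; \sum_{i} \log p_{\theta}\!\left(x_i \mid x_{-i}\right),
```

where $x_{-i}$ denotes all variables other than $x_i$ (in practice, only the neighbors of $i$ in the graphical model matter). Each conditional involves a normalization over a single variable's domain, which is what makes the objective tractable.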
Efficient Neural Network based Classification and Outlier Detection for Image Moderation using Compressed Sensing and Group Testing
Abstract
Popular social media platforms employ neural network based image moderation engines to classify images uploaded on them as having potentially objectionable content. Such moderation engines must answer a large number of queries with heavy computational cost, even though the actual number of images with objectionable content is usually a tiny fraction. Inspired by recent work on Neural Group Testing, we propose an approach which exploits this fact to reduce the overall computational cost of such engines using the technique of Compressed Sensing (CS). We present the quantitative matrix-pooled neural network (QMPNN), which takes as input $n$ images, and a $m \times n$ binary pooling matrix with $m < n$, whose rows indicate $m$ pools of images i.e. selections of $r$ images out of $n$. The QMPNN efficiently outputs the product of this matrix with the unknown sparse binary vector indicating whether each image is objectionable or not, i.e. it outputs the number of objectionable images in each pool. For suitable matrices, this is decoded using CS decoding algorithms to predict which images were objectionable. The computational cost of running the QMPNN and the CS algorithms is significantly lower than the cost of using a neural network with the same number of parameters separately on each image to classify the images, which we demonstrate via extensive experiments. Our technique is inherently resilient to moderate levels of errors in the prediction from the QMPNN. Furthermore, we present pooled deep outlier detection, which brings CS and group testing techniques to deep outlier detection, to provide for the case when the objectionable images do not belong to a set of pre-defined classes. This technique enables efficient automated moderation of off-topic images shared on topical forums dedicated to sharing images of a certain single class, many of which are currently human-moderated.
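The decoding step can be illustrated with the classic COMP rule, a simpler binary-outcome decoder than the quantitative CS decoding described above (function names and the example pooling design are ours):

```python
def pool_counts(A, x):
    # y = A x: the number of objectionable images in each pool,
    # which is what the QMPNN is trained to output per pool.
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def comp_decode(A, y):
    # COMP-style decoding: every image appearing in a zero-count pool is
    # certainly clean; all remaining images are flagged as objectionable.
    n = len(A[0])
    clean = set()
    for row, count in zip(A, y):
        if count == 0:
            clean.update(j for j, a in enumerate(row) if a)
    return [0 if j in clean else 1 for j in range(n)]
```

For a well-chosen pooling matrix and a sufficiently sparse set of objectionable images, this recovers the flags exactly while running the expensive network only once per pool instead of once per image.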
The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma
Authors: Dominic LaBella, Maruf Adewole, Michelle Alonso-Basanta, Talissa Altes, Syed Muhammad Anwar, Ujjwal Baid, Timothy Bergquist, Radhika Bhalerao, Sully Chen, Verena Chung, Gian-Marco Conte, Farouk Dako, James Eddy, Ivan Ezhov, Devon Godfrey, Fathi Hilal, Ariana Familiar, Keyvan Farahani, Juan Eugenio Iglesias, Zhifan Jiang, Elaine Johanson, Anahita Fathi Kazerooni, Collin Kent, John Kirkpatrick, Florian Kofler, Koen Van Leemput, Hongwei Bran Li, Xinyang Liu, Aria Mahtabfar, Shan McBurney-Lin, Ryan McLean, Zeke Meier, Ahmed W Moawad, John Mongan, Pierre Nedelec, Maxence Pajot, Marie Piraud, Arif Rashid, Zachary Reitman, Russell Takeshi Shinohara, Yury Velichko, Chunhao Wang, Pranav Warman, Walter Wiggins, Mariam Aboian, Jake Albrecht, Udunna Anazodo, Spyridon Bakas, Adam Flanders, Anastasia Janas, et al. (10 additional authors not shown)
Abstract
Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI including enhancing tumor, non-enhancing tumor core, and surrounding nonenhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics utilized across the BraTS 2023 series of challenges including the Dice similarity coefficient and Hausdorff distance. The models developed during the course of this challenge will aid in incorporation of automated meningioma MRI segmentation into clinical practice, which will ultimately improve care of patients with meningioma.
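For reference, the Dice similarity coefficient used for evaluation is straightforward to compute on binary masks (a generic sketch, not the challenge's evaluation code, which also handles multiple sub-regions and the Hausdorff distance):

```python
def dice_coefficient(pred, truth):
    # Dice similarity between two flattened binary segmentation masks:
    # 2|A ∩ B| / (|A| + |B|); two empty masks count as a perfect match.
    intersection = sum(p and t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2.0 * intersection / denom if denom else 1.0
```

In practice each predicted sub-region (enhancing tumor, non-enhancing tumor core, T2/FLAIR hyperintensity) would be scored separately against its expert annotation.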
Keyword: faster
Mem-Rec: Memory Efficient Recommendation System using Alternative Representation
Authors: Gopu Krishna Jha, Anthony Thomas, Nilesh Jain, Sameh Gobriel, Tajana Rosing, Ravi Iyer
Subjects: Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Abstract
Deep learning-based recommendation systems (e.g., DLRMs) are widely used AI models to provide high-quality personalized recommendations. Training data used for modern recommendation systems commonly includes categorical features taking on tens-of-millions of possible distinct values. These categorical tokens are typically assigned learned vector representations, which are stored in large embedding tables on the order of 100s of GB. Storing and accessing these tables represent a substantial burden in commercial deployments. Our work proposes MEM-REC, a novel alternative representation approach for embedding tables. MEM-REC leverages bloom filters and hashing methods to encode categorical features using two cache-friendly embedding tables. The first table (token embedding) contains raw embeddings (i.e., learned vector representations), and the second table (weight embedding), which is much smaller, contains weights to scale these raw embeddings to provide better discriminative capability to each data point. We provide a detailed architecture, design, and analysis of MEM-REC, addressing trade-offs in accuracy and computation requirements in comparison with state-of-the-art techniques. We show that MEM-REC can not only maintain the recommendation quality and significantly reduce the memory footprint for commercial scale recommendation models but can also improve the embedding latency. In particular, based on our results, MEM-REC compresses the MLPerf CriteoTB benchmark DLRM model size by 2900x and performs up to 3.4x faster embeddings while achieving the same AUC as that of the full uncompressed model.
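The two-table idea can be sketched as follows (our own toy version of a hashed lookup; MEM-REC's actual bloom-filter construction and learned parameters differ): each token selects a few rows of the raw-embedding table via hashing, each scaled by an entry of the much smaller weight table.

```python
import hashlib

def stable_hash(token, seed, n_buckets):
    # deterministic, seedable hash of a categorical token into a bucket
    digest = hashlib.sha256(f"{seed}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_buckets

def mem_rec_embed(token, token_table, weight_table, n_hashes=2):
    # Sum a few hashed rows of the raw-embedding table, each scaled by an
    # entry of the (much smaller) weight table, instead of storing one
    # dedicated row per token.
    dim = len(token_table[0])
    vec = [0.0] * dim
    for seed in range(n_hashes):
        row = token_table[stable_hash(token, seed, len(token_table))]
        w = weight_table[stable_hash(token, seed + n_hashes, len(weight_table))]
        vec = [v + w * r for v, r in zip(vec, row)]
    return vec
```

The memory saving comes from the table sizes being fixed hyperparameters rather than growing with the tens of millions of distinct token values.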
ActUp: Analyzing and Consolidating tSNE and UMAP
Authors: Andrew Draganov, Jakob Rødsgaard Jørgensen, Katrine Scheel Nellemann, Davide Mottin, Ira Assent, Tyrus Berry, Cigdem Aslay
Abstract
tSNE and UMAP are popular dimensionality reduction algorithms due to their speed and interpretable low-dimensional embeddings. Despite their popularity, however, little work has been done to study their full span of differences. We theoretically and experimentally evaluate the space of parameters in both tSNE and UMAP and observe that a single one -- the normalization -- is responsible for switching between them. This, in turn, implies that a majority of the algorithmic differences can be toggled without affecting the embeddings. We discuss the implications this has on several theoretic claims behind UMAP, as well as how to reconcile them with existing tSNE interpretations. Based on our analysis, we provide a method (\ourmethod) that combines previously incompatible techniques from tSNE and UMAP and can replicate the results of either algorithm. This allows our method to incorporate further improvements, such as an acceleration that obtains either method's outputs faster than UMAP. We release improved versions of tSNE, UMAP, and \ourmethod that are fully plug-and-play with the traditional libraries at https://github.com/Andrew-Draganov/GiDR-DUN
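The normalization switch the abstract identifies can be shown on a toy affinity matrix. This is a deliberately simplified sketch: real tSNE calibrates a per-point bandwidth via perplexity and real UMAP fits local connectivity, whereas here a fixed Gaussian kernel stands in for both.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))

# Squared pairwise distances.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# Unnormalized affinities (UMAP-style, with a fixed bandwidth here;
# real UMAP fits a bandwidth per point).
P = np.exp(-d2)
np.fill_diagonal(P, 0.0)

# tSNE additionally normalizes the affinities into a probability
# distribution over all pairs; toggling this single step is the
# "normalization" parameter discussed in the paper.
P_tsne = P / P.sum()

print(np.isclose(P_tsne.sum(), 1.0))
```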
Optimizing Memory Mapping Using Deep Reinforcement Learning
Authors: Pengming Wang, Mikita Sazanovich, Berkin Ilbeyi, Phitchaya Mangpo Phothilimthana, Manish Purohit, Han Yang Tay, Ngân Vũ, Miaosen Wang, Cosmin Paduraru, Edouard Leurent, Anton Zhernov, Julian Schrittwieser, Thomas Hubert, Robert Tung, Paula Kurylowicz, Kieran Milan, Oriol Vinyals, Daniel J. Mankowitz
Abstract
Resource scheduling and allocation is a critical component of many high-impact systems ranging from congestion control to cloud computing. Finding more optimal solutions to these problems often yields significant resource and time savings, reduces device wear-and-tear, and can even lower carbon emissions. In this paper, we focus on a specific instance of a scheduling problem, namely the memory mapping problem that occurs during compilation of machine learning programs: that is, mapping tensors to different memory layers to optimize execution time. We introduce an approach for solving the memory mapping problem using Reinforcement Learning (RL). RL is a solution paradigm well-suited for sequential decision-making problems that are amenable to planning, and for combinatorial search spaces with high-dimensional data inputs. We formulate the problem as a single-player game, which we call the mallocGame, such that high-reward trajectories of the game correspond to efficient memory mappings on the target hardware. We also introduce a Reinforcement Learning agent, mallocMuZero, and show that it is capable of playing this game to discover new and improved memory mapping solutions that lead to faster execution times on real ML workloads on ML accelerators. We compare the performance of mallocMuZero to the default solver used by the Accelerated Linear Algebra (XLA) compiler on a benchmark of realistic ML workloads. In addition, we show that mallocMuZero is capable of improving the execution time of the recently published AlphaTensor matrix multiplication model.
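The game formulation can be illustrated with a toy stand-in. The paper's actual mallocGame, reward function, and hardware model are far richer; the tensors, capacities, and costs below are invented for illustration only. The essential structure survives: tensors are placed one by one into fast or slow memory, and a trajectory's return is the negative total access time.

```python
from itertools import product

# (name, size, access count) -- illustrative values only.
tensors = [("a", 4, 10), ("b", 3, 2), ("c", 2, 8)]
FAST_CAP, FAST_COST, SLOW_COST = 6, 1, 10

def play(actions):
    """Return the (negative) total access time of a mapping trajectory.
    A 'fast' action that would exceed capacity silently falls back to
    slow memory, mimicking an infeasible placement."""
    used, total = 0, 0
    for (name, size, acc), a in zip(tensors, actions):
        if a == "fast" and used + size <= FAST_CAP:
            used += size
            total += acc * FAST_COST * size
        else:
            total += acc * SLOW_COST * size
    return -total

# Brute force over all trajectories; an RL agent would search this
# space instead, guided by learned value estimates.
best = max(product(["fast", "slow"], repeat=3), key=play)
print(best, play(best))
```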
Abstract
Adapters have been positioned as a parameter-efficient fine-tuning (PEFT) approach, whereby a minimal number of parameters are added to the model and fine-tuned. However, adapters have not been sufficiently analyzed to understand if PEFT translates to benefits in training/deployment efficiency and maintainability/extensibility. Through extensive experiments on many adapters, tasks, and languages in supervised and cross-lingual zero-shot settings, we clearly show that for Natural Language Understanding (NLU) tasks, the parameter efficiency in adapters does not translate to efficiency gains compared to full fine-tuning of models. More precisely, adapters are relatively expensive to train and have slightly higher deployment latency. Furthermore, the maintainability/extensibility benefits of adapters can be achieved with simpler approaches like multi-task training via full fine-tuning, which also provide relatively faster training times. We, therefore, recommend that for moderately sized models for NLU tasks, practitioners should rely on full fine-tuning or multi-task training rather than using adapters. Our code is available at https://github.com/AI4Bharat/adapter-efficiency.
Gallery Sampling for Robust and Fast Face Identification
Abstract
Deep learning methods have achieved brilliant results in face recognition. One of the important tasks in improving performance is to collect and label as many images as possible. However, labeling identities and checking the quality of large image data are difficult tasks, and mistakes cannot be avoided when processing large data. Previous works have tried to deal with this problem only in the training domain; however, mistakes can cause much more serious problems if they occur in the gallery data of face identification. We propose gallery data sampling methods that are robust to outliers, including wrongly labeled, low-quality, and less-informative images, and that reduce search time. The proposed sampling-by-pruning and sampling-by-generating methods significantly improved face identification performance on our 5.4M web image dataset of celebrities. The proposed method achieved 0.0975 in terms of FNIR at FPIR=0.01, while the conventional method showed 0.3891. The average number of feature vectors for each individual gallery was reduced from 115.9 to 17.1, enabling much faster search. We also ran experiments on public datasets: our method achieved FNIRs of 0.1314 and 0.0668 at FPIR=0.01 on CASIA-WebFace and MS1MV2, while the conventional method gave 0.5446 and 0.1327, respectively.
Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training
Abstract
Training generative adversarial networks (GANs) stably is a challenging task. The generator in a GAN transforms noise vectors, typically Gaussian distributed, into realistic data such as images. In this paper, we propose a novel approach for training GANs with images as inputs, but without enforcing any pairwise constraints. The intuition is that images are more structured than noise, which the generator can leverage to learn a more robust transformation. The process can be made efficient by identifying closely related datasets, or a ``friendly neighborhood'' of the target distribution, inspiring the moniker Spider GAN. To define friendly neighborhoods leveraging proximity between datasets, we propose a new measure called the signed inception distance (SID), inspired by the polyharmonic kernel. We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets, for instance, between Tiny-ImageNet and CelebA faces. Further, we demonstrate cascading Spider GAN, where the output distribution from a pre-trained GAN generator is used as the input to the subsequent network. Effectively, this transports one distribution to another in a cascaded fashion until the target is learnt, a new flavor of transfer learning. We demonstrate the efficacy of the Spider approach on DCGAN, conditional GAN, PGGAN, StyleGAN2, and StyleGAN3. The proposed approach achieves state-of-the-art Frechet inception distance (FID) values, with one-fifth of the training iterations, in comparison to its baseline counterparts on high-resolution small datasets such as MetFaces, Ukiyo-E Faces, and AFHQ-Cats.
Keyword: mobile
Enhanced Hybrid Automatic Repeat Request Scheduling for Non-Terrestrial IoT Networks
Authors: Gautham Prasad, Vishnu Rajendra Chandrika, Lutz Lampe, Gus Vos
Subjects: Networking and Internet Architecture (cs.NI); Signal Processing (eess.SP)
Abstract
Non-terrestrial networks (NTNs) complement their terrestrial counterparts in enabling ubiquitous connectivity globally by serving unserved and/or underserved areas of the world. While supporting enhanced mobile broadband (eMBB) data over NTNs has been extensively studied in the past, focus on massive machine-type communication (mMTC) over NTNs is currently growing, as also witnessed by the new study and work items included in the 3rd generation partnership project (3GPP) agenda for commissioning specifications for Internet-of-Things (IoT) communications over NTNs. Supporting mMTC in non-terrestrial cellular IoT (C-IoT) networks requires jointly addressing the unique challenges introduced by NTNs and C-IoT communications. In this paper, we tackle one such issue: the extended round-trip time and increased path loss in NTNs, which degrade network throughput. We propose smarter transport block scheduling methods that increase the efficiency of resource utilization. We conduct end-to-end link-level simulations of C-IoT traffic over NTNs and present numerical results of the achieved data rate gains to show the performance of our proposed solutions against legacy scheduling methods.
Dish detection in food platters: A framework for automated diet logging and nutrition management
Abstract
Diet is central to the epidemic of lifestyle disorders. Accurate and effortless diet logging is one of the significant bottlenecks for effective diet management and calorie restriction. Dish detection from food platters is a challenging problem due to a visually complex food layout. We present an end-to-end computational framework for diet management, from data compilation, annotation, and state-of-the-art model identification to its mobile app implementation. As a case study, we implement the framework in the context of Indian food platters known for their complex presentation that poses a challenge for the automated detection of dishes. Starting with the 61 most popular Indian dishes, we identify the state-of-the-art model through a comparative analysis of deep-learning-based object detection architectures. Rooted in a meticulous compilation of 68,005 platter images with 134,814 manual dish annotations, we first compare ten architectures for multi-label classification to identify ResNet152 (mAP=84.51%) as the best model. YOLOv8x (mAP=87.70%) emerged as the best model architecture for dish detection among the eight deep-learning models implemented after a thorough performance evaluation. By comparing with the state-of-the-art model for the IndianFood10 dataset, we demonstrate the superior object detection performance of YOLOv8x for this subset and establish Resnet152 as the best architecture for multi-label classification. The models thus trained on richly annotated data can be extended to include dishes from across global cuisines. The proposed framework is demonstrated through a proof-of-concept mobile application with diverse applications for diet logging, food recommendation systems, nutritional interventions, and mitigation of lifestyle disorders.
Keyword: pruning
Graph Neural Network for Accurate and Low-complexity SAR ATR
Authors: Bingyi Zhang, Sasindu Wijeratne, Rajgopal Kannan, Viktor Prasanna, Carl Busart
Subjects: Computer Vision and Pattern Recognition (cs.CV); Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) is the key technique for remote sensing image recognition. State-of-the-art works exploit deep convolutional neural networks (CNNs) for SAR ATR, leading to high computation costs. These deep CNN models are unsuitable for deployment on resource-limited platforms. In this work, we propose a graph neural network (GNN) model to achieve accurate and low-latency SAR ATR. We transform the input SAR image into a graph representation. The proposed GNN model consists of a stack of GNN layers that operates on the input graph to perform target classification. Unlike state-of-the-art CNNs, which need heavy convolution operations, the proposed GNN model has low computation complexity and achieves comparably high accuracy. The GNN-based approach enables our proposed \emph{input pruning} strategy. By filtering out the irrelevant vertices in the input graph, we can reduce the computation complexity. Moreover, we propose the \emph{model pruning} strategy to sparsify the model weight matrices, which further reduces the computation complexity. We evaluate the proposed GNN model on the MSTAR dataset and a ship discrimination dataset. The evaluation results show that the proposed GNN model achieves 99.38\% and 99.7\% classification accuracy on the above two datasets, respectively. The proposed pruning strategies can prune 98.6\% of input vertices and 97\% of weight entries with negligible accuracy loss. Compared with state-of-the-art CNNs, the proposed GNN model has only 1/3000 of the computation cost and 1/80 of the model size.
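Input pruning in this setting can be sketched in a few lines. The criterion below (keep only pixels with non-zero intensity after thresholding) is an assumption for illustration; the paper's actual relevance rule for SAR images may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((8, 8))
image[image < 0.7] = 0.0            # most SAR pixels carry little energy

# Flatten pixels into graph vertices; keep only "relevant" ones
# (assumed criterion: non-zero intensity).
features = image.reshape(-1, 1)
keep = np.flatnonzero(features[:, 0] > 0.0)

print(len(features), len(keep))     # vertices before vs. after pruning
```

Since GNN layer cost scales with the number of vertices, dropping irrelevant vertices before the first layer directly reduces computation, which is the mechanism behind the 98.6% input-pruning figure.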
Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems
Authors: Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
Abstract
Federated Learning (FL) is a privacy-preserving distributed machine learning approach geared towards applications on edge devices. However, the problem of designing custom neural architectures in federated environments has not been tackled from the perspective of overall system efficiency. In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space. We propose a novel diversified sampling strategy that balances exploration and exploitation of the search space by initially maximizing the distance between the samples and progressively shrinking this distance as the training progresses. We then perform channel pruning to further reduce the training complexity at the devices. We show that our approach outperforms several sampling strategies including Hadamard sampling, where the samples are maximally separated. We evaluate our method on the CIFAR10, CIFAR100, EMNIST, and TinyImagenet benchmarks and present a comprehensive analysis of different aspects of federated learning such as scalability and non-IID data. DC-NAS achieves near iso-accuracy compared to full-scale federated NAS with 50% fewer resources.
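The shrinking-distance idea can be sketched as a greedy separation-constrained sampler. All specifics below (4-dimensional encodings, the initial radius, the decay factor) are illustrative assumptions, not DC-NAS's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
space = rng.random((200, 4))        # toy architecture encodings

def diversified_sample(points, n_rounds=5, k=8, r0=0.8, decay=0.6):
    """Per round, greedily pick up to k samples whose pairwise distance
    is at least `radius`; the radius shrinks each round, moving from
    exploration (far-apart samples) to exploitation (close samples)."""
    picks, radius = [], r0
    for _ in range(n_rounds):
        chosen = []
        for p in rng.permutation(len(points)):
            if all(np.linalg.norm(points[p] - points[c]) >= radius
                   for c in chosen):
                chosen.append(p)
            if len(chosen) == k:
                break
        picks.append(chosen)
        radius *= decay
    return picks

rounds = diversified_sample(space)
```

Note the sketch does not exclude previously chosen points across rounds; early rounds simply enforce a larger minimum separation than later ones.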
Gallery Sampling for Robust and Fast Face Identification
Abstract
Deep learning methods have achieved brilliant results in face recognition. One of the important tasks in improving performance is to collect and label as many images as possible. However, labeling identities and checking the quality of large image data are difficult tasks, and mistakes cannot be avoided when processing large data. Previous works have tried to deal with this problem only in the training domain; however, mistakes can cause much more serious problems if they occur in the gallery data of face identification. We propose gallery data sampling methods that are robust to outliers, including wrongly labeled, low-quality, and less-informative images, and that reduce search time. The proposed sampling-by-pruning and sampling-by-generating methods significantly improved face identification performance on our 5.4M web image dataset of celebrities. The proposed method achieved 0.0975 in terms of FNIR at FPIR=0.01, while the conventional method showed 0.3891. The average number of feature vectors for each individual gallery was reduced from 115.9 to 17.1, enabling much faster search. We also ran experiments on public datasets: our method achieved FNIRs of 0.1314 and 0.0668 at FPIR=0.01 on CASIA-WebFace and MS1MV2, while the conventional method gave 0.5446 and 0.1327, respectively.
PillarAcc: Sparse PointPillars Accelerator for Real-Time Point Cloud 3D Object Detection on Edge Devices
Authors: Minjae Lee, Hyungmin Kim, Seongmin Park, Minyong Yoon, Janghwan Lee, Junwon Choi, Mingu Kang, Jungwook Choi
Abstract
3D object detection using point cloud (PC) data is vital for autonomous driving perception pipelines, where efficient encoding is key to meeting stringent resource and latency requirements. PointPillars, a widely adopted bird's-eye view (BEV) encoding, aggregates 3D point cloud data into 2D pillars for high-accuracy 3D object detection. However, most state-of-the-art methods employing PointPillar overlook the inherent sparsity of pillar encoding, missing opportunities for significant computational reduction. In this study, we propose a groundbreaking algorithm-hardware co-design that accelerates sparse convolution processing and maximizes sparsity utilization in pillar-based 3D object detection networks. We investigate sparsification opportunities using an advanced pillar-pruning method, achieving an optimal balance between accuracy and sparsity. We introduce PillarAcc, a state-of-the-art sparsity support mechanism that enhances sparse pillar convolution through linear complexity input-output mapping generation and conflict-free gather-scatter memory access. Additionally, we propose dataflow optimization techniques, dynamically adjusting the pillar processing schedule for optimal hardware utilization under diverse sparsity operations. We evaluate PillarAcc on various cutting-edge 3D object detection networks and benchmarks, achieving remarkable speedup and energy savings compared to representative edge platforms, demonstrating record-breaking PointPillars speed of 500FPS with minimal compromise in accuracy.
Keyword: voxel
Geometric Modeling and Physics Simulation Framework for Building a Digital Twin of Extrusion-based Additive Manufacturing
Abstract
Accurate simulation of the printing process is essential for improving print quality, reducing waste, and optimizing the printing parameters of extrusion-based additive manufacturing. Traditional additive manufacturing simulations are very compute-intensive and are not scalable to simulate even moderately sized geometries. In this paper, we propose a general framework for creating a digital twin of the dynamic printing process by performing physics simulations with the intermediate print geometries. Our framework takes a general extrusion-based additive manufacturing G-code, generates an analysis-suitable voxelized geometry representation from the print schedule, and performs physics-based (transient thermal and phase change) simulations of the printing process. Our approach leverages parallel adaptive octree meshes both for the voxelized geometry representation and for fast simulations to enable real-time predictions. We demonstrate the effectiveness of our method by simulating the printing of complex geometries at high voxel resolutions with both sparse and dense infills. Our results show that this approach scales to high voxel resolutions and can predict the transient heat distribution as the print progresses. This work lays the computational and algorithmic foundations for building real-time digital twins and performing rapid virtual print sequence exploration to improve print quality and further reduce material waste.
Keyword: lidar
MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features
Authors: Bo Zhou, Jiapeng Xie, Yan Pan, Jiajie Wu, Chuanzhao Lu
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
Abstract
Identifying moving objects is an essential capability for autonomous systems, as it provides critical information for pose estimation, navigation, collision avoidance and static map construction. In this paper, we present MotionBEV, a fast and accurate framework for LiDAR moving object segmentation, which segments moving objects with appearance and motion features in bird's eye view (BEV) domain. Our approach converts 3D LiDAR scans into 2D polar BEV representation to achieve real-time performance. Specifically, we learn appearance features with a simplified PointNet, and compute motion features through the height differences of consecutive frames of point clouds projected onto vertical columns in the polar BEV coordinate system. We employ a dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM) to adaptively fuse the spatio-temporal information from appearance and motion features. Our approach achieves state-of-the-art performance on the SemanticKITTI-MOS benchmark, with an average inference time of 23ms on an RTX 3090 GPU. Furthermore, to demonstrate the practical effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a solid-state LiDAR, which features non-repetitive scanning patterns and small field of view.
Keyword: diffusion
Hawkes Process based on Controlled Differential Equations
Abstract
Hawkes processes are a popular framework to model the occurrence of sequential events, i.e., occurrence dynamics, in several fields such as social diffusion. In real-world scenarios, the inter-arrival times among events are irregular. However, existing neural network-based Hawkes process models not only i) fail to capture such complicated irregular dynamics, but also ii) resort to heuristics to calculate the log-likelihood of events, since they are mostly based on neural networks designed for regular discrete inputs. To this end, we present the concept of a Hawkes process based on controlled differential equations (HP-CDE), adopting the neural controlled differential equation (neural CDE) technology, an analogue of continuous RNNs. Since HP-CDE continuously reads data, i) irregular time-series datasets can be properly treated, preserving their uneven temporal spacing, and ii) the log-likelihood can be exactly computed. Moreover, as both Hawkes processes and neural CDEs were first developed to model complicated human behavioral dynamics, neural CDE-based Hawkes processes are successful in modeling such occurrence dynamics. In our experiments with 4 real-world datasets, our method outperforms existing methods by non-trivial margins.
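For context on what "exactly computed" log-likelihood means here: for the classical Hawkes process with an exponential kernel (not the paper's neural CDE model), the log-likelihood has a well-known closed form computable in O(n) via the standard recursion.

```python
import math

def hawkes_loglik(times, T, mu, alpha, beta):
    """Exact log-likelihood of a Hawkes process with exponential kernel
    lambda(t) = mu + alpha * sum_{t_j < t} exp(-beta * (t - t_j))
    on [0, T], using the O(n) recursion A_i = exp(-beta*dt) * (1 + A_{i-1})."""
    ll, A, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            A = math.exp(-beta * (t - prev)) * (1.0 + A)
        ll += math.log(mu + alpha * A)
        prev = t
    # Compensator: integral of lambda(t) over [0, T].
    comp = mu * T + (alpha / beta) * sum(
        1.0 - math.exp(-beta * (T - t)) for t in times)
    return ll - comp

print(hawkes_loglik([0.5, 1.0, 2.0], T=3.0, mu=0.5, alpha=0.3, beta=1.0))
```

With alpha = 0 this reduces to the homogeneous Poisson log-likelihood, which is a handy sanity check.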
On a Voter Model with Context-Dependent Opinion Adoption
Authors: Luca Becchetti, Vincenzo Bonifaci, Emilio Cruciani, Francesco Pasquale
Subjects: Multiagent Systems (cs.MA); Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Opinion diffusion is a crucial phenomenon in social networks, often underlying the way in which a collective of agents develops a consensus on relevant decisions. The voter model is a well-known theoretical model to study opinion spreading in social networks and structured populations. Its simplest version assumes that an updating agent will adopt the opinion of a neighboring agent chosen at random. The model allows us to study, for example, the probability that a certain opinion will fixate into a consensus opinion, as well as the expected time it takes for a consensus opinion to emerge. Standard voter models are oblivious to the opinions held by the agents involved in the opinion adoption process. We propose and study a context-dependent opinion spreading process on an arbitrary social graph, in which the probability that an agent abandons opinion $a$ in favor of opinion $b$ depends on both $a$ and $b$. We discuss the relations of the model with existing voter models and then derive theoretical results for both the fixation probability and the expected consensus time for two opinions, for both the synchronous and the asynchronous update models.
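The context-dependent update rule can be simulated directly. The adoption probabilities below are illustrative values, not from the paper, and a complete graph stands in for the arbitrary social graph.

```python
import random

random.seed(4)

# q[(a, b)] = probability that an agent holding opinion a adopts
# opinion b after sampling a b-neighbor (illustrative values).
q = {(0, 1): 0.8, (1, 0): 0.3}

def step(opinions):
    i = random.randrange(len(opinions))
    j = random.randrange(len(opinions))   # complete graph: any agent
    a, b = opinions[i], opinions[j]
    if a != b and random.random() < q[(a, b)]:
        opinions[i] = b

opinions = [0] * 10 + [1] * 10
for _ in range(100_000):
    if len(set(opinions)) == 1:           # consensus reached
        break
    step(opinions)

print(set(opinions))
```

Repeating such runs estimates the fixation probability of each opinion; here the asymmetry of q biases fixation toward opinion 1.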
$α$-robust error estimates of general non-uniform time-step numerical schemes for reaction-subdiffusion problems
Abstract
Numerous error estimates have been carried out for various numerical schemes for subdiffusion equations. Unfortunately, most error bounds suffer from a factor $1/(1-\alpha)$ or $\Gamma(1-\alpha)$, which blows up as the fractional order $\alpha\to 1^-$, a phenomenon not consistent with the regularity of the continuous problem or with numerical simulations in practice. Although efforts have been made to avoid this factor blow-up phenomenon, a robust analysis of error estimates still remains incomplete for numerical schemes with general nonuniform time steps. In this paper, we consider the $\alpha$-robust error analysis of convolution-type schemes for subdiffusion equations with general nonuniform time steps, and provide explicit factors in the error bounds with dependence information on $\alpha$ and the temporal mesh sizes. As illustrations, we apply our abstract framework to two widely used schemes, i.e., the L1 scheme and Alikhanov's scheme. Our rigorous proofs reveal that the stability and convergence of a class of convolution-type schemes are $\alpha$-robust, i.e., the factor does not blow up as $\alpha\to 1^-$ with general nonuniform time steps, even when a rather general initial regularity condition is considered.
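For reference, the L1 scheme mentioned above has the following standard form (background, not taken from this abstract). The Caputo derivative of order $0<\alpha<1$ is

```latex
{}^{C}_{0}D_t^{\alpha} u(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u'(s)}{(t-s)^{\alpha}}\,ds,
  \qquad 0<\alpha<1,
```

and on a nonuniform mesh $0=t_0<t_1<\dots<t_N=T$ with steps $\tau_k = t_k - t_{k-1}$, the L1 scheme replaces $u$ by its piecewise-linear interpolant, giving

```latex
\bigl(D_\tau^{\alpha} u\bigr)^n
  = \sum_{k=1}^{n} \frac{u^k - u^{k-1}}{\tau_k}
    \int_{t_{k-1}}^{t_k} \frac{(t_n - s)^{-\alpha}}{\Gamma(1-\alpha)}\,ds .
```

The $\Gamma(1-\alpha)$ appearing here is the source of the factor whose blow-up as $\alpha\to 1^-$ the paper's $\alpha$-robust analysis removes from the error bounds.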
Keyword: dynamic
Hawkes Process based on Controlled Differential Equations
Abstract
Hawkes processes are a popular framework to model the occurrence of sequential events, i.e., occurrence dynamics, in several fields such as social diffusion. In real-world scenarios, the inter-arrival times among events are irregular. However, existing neural network-based Hawkes process models not only i) fail to capture such complicated irregular dynamics, but also ii) resort to heuristics to calculate the log-likelihood of events, since they are mostly based on neural networks designed for regular discrete inputs. To this end, we present the concept of a Hawkes process based on controlled differential equations (HP-CDE), adopting the neural controlled differential equation (neural CDE) technology, an analogue of continuous RNNs. Since HP-CDE continuously reads data, i) irregular time-series datasets can be properly treated, preserving their uneven temporal spacing, and ii) the log-likelihood can be exactly computed. Moreover, as both Hawkes processes and neural CDEs were first developed to model complicated human behavioral dynamics, neural CDE-based Hawkes processes are successful in modeling such occurrence dynamics. In our experiments with 4 real-world datasets, our method outperforms existing methods by non-trivial margins.
Automated Data Denoising for Recommendation
Authors: Yingqiang Ge, Mostafa Rahmani, Athirai Irissappane, Jose Sepulveda, Fei Wang, James Caverlee, Yongfeng Zhang
Abstract
In real-world scenarios, most platforms collect both large-scale, naturally noisy implicit feedback and small-scale yet highly relevant explicit feedback. Due to the issue of data sparsity, implicit feedback is often the default choice for training recommender systems (RS); however, such data can be very noisy due to the randomness and diversity of user behaviors. For instance, a large portion of clicks may not reflect true user preferences, and many purchases may result in negative reviews or returns. Fortunately, by utilizing the strengths of both types of feedback to compensate for the weaknesses of the other, we can mitigate the above issue at almost no cost. In this work, we propose an Automated Data Denoising framework, \textbf{\textit{AutoDenoise}}, for recommendation, which uses a small amount of explicit data as a validation set to guide the recommender training. Inspired by the generalized definition of curriculum learning (CL), AutoDenoise learns to automatically and dynamically assign the most appropriate (discrete or continuous) weights to each implicit data sample along the training process under the guidance of the validation performance. Specifically, we use a delicately designed controller network to generate the weights, combine the weights with the loss of each input data sample to train the recommender system, and optimize the controller with reinforcement learning to maximize the expected accuracy of the trained RS on the noise-free validation set. Thorough experiments indicate that AutoDenoise is able to boost the performance of state-of-the-art recommendation algorithms on several public benchmark datasets.
Theoretical Analyses of Evolutionary Algorithms on Time-Linkage OneMax with General Weights
Authors: Weijie Zheng, Xin Yao
Subjects: Neural and Evolutionary Computing (cs.NE)
Abstract
Evolutionary computation has shown its superiority in dynamic optimization, but for the (dynamic) time-linkage problems, some theoretical studies have revealed the possible weakness of evolutionary computation. Since the theoretically analyzed time-linkage problem only considers the influence of an extremely strong negative time-linkage effect, it remains unclear whether the weakness also appears in problems with more general time-linkage effects. Besides, understanding in depth the relationship between time-linkage effect and algorithmic features is important to build up our knowledge of what algorithmic features are good at what kinds of problems. In this paper, we analyze the general time-linkage effect and consider the time-linkage OneMax with general weights whose absolute values reflect the strength and whose sign reflects the positive or negative influence. We prove that except for some small and positive time-linkage effects (that is, for weights $0$ and $1$), randomized local search (RLS) and (1+1)EA cannot converge to the global optimum with a positive probability. More precisely, for the negative time-linkage effect (for negative weights), both algorithms cannot efficiently reach the global optimum and the probability of failing to converge to the global optimum is at least $1-o(1)$. For the not so small positive time-linkage effect (positive weights greater than $1$), such a probability is at most $c+o(1)$ where $c$ is a constant strictly less than $1$.
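One plausible reading of the weighted time-linkage OneMax described above, sketched for concreteness (the exact definition in the paper may differ): current OneMax fitness plus a weighted contribution from the previous generation's first bit, where the weight's sign and magnitude encode the direction and strength of the time-linkage effect.

```python
def timelinkage_onemax(x_now, x_prev, weight):
    """OneMax on the current bitstring plus a time-linkage term:
    the first bit of the *previous* solution, scaled by `weight`
    (negative weights penalize having set that bit, positive reward it)."""
    return weight * x_prev[0] + sum(x_now)

# A strongly negative weight cancels the gain of an all-ones string,
# illustrating how time linkage can trap hill-climbers like RLS.
print(timelinkage_onemax([1, 1, 1], [1, 0, 0], weight=-3))
```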
Dynamic Routing in Stochastic Urban Air Mobility Networks: A Markov Decision Process Approach
Abstract
Urban air mobility (UAM) is an emerging concept in short-range aviation transportation, where the aircraft will take off, land, and charge their batteries at a set of vertistops, and travel only through a set of flight corridors connecting these vertistops. We study the problem of routing an electric aircraft from its origin vertistop to its destination vertistop with the minimal expected total travel time. We first introduce a UAM network model that accounts for the limited battery capacity of aircraft, stochastic travel times of flight corridors, stochastic queueing delays, and a limited number of battery-charging stations at vertistops. Based on this model, we provide a sufficient condition for the existence of a routing strategy that avoids battery exhaustion. Furthermore, we show how to compute such a strategy by computing the optimal policy in a Markov decision process, a mathematical framework for decision-making in a stochastic dynamic environment. We illustrate our results using a case study with 29 vertistops and 137 flight corridors.
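The "optimal policy in a Markov decision process" step can be sketched with value iteration on a toy network. The three states, actions, probabilities, and travel times below are invented stand-ins for the vertistop network; the paper's model additionally tracks battery level and charger queues.

```python
states = ["A", "B", "goal"]
# transitions[s][a] = list of (probability, next_state, travel_time)
transitions = {
    "A": {"via_B": [(1.0, "B", 1.0)],
          "direct": [(0.5, "goal", 3.0), (0.5, "A", 3.0)]},  # congestion risk
    "B": {"fly": [(1.0, "goal", 1.0)]},
}

# Value iteration: V[s] converges to the minimal expected remaining
# travel time from state s.
V = {s: 0.0 for s in states}
for _ in range(100):
    for s in ["A", "B"]:
        V[s] = min(sum(p * (c + V[ns]) for p, ns, c in outcomes)
                   for outcomes in transitions[s].values())

print(round(V["A"], 2))
```

Here the risky "direct" corridor has expected cost 6, so the optimal policy routes A -> B -> goal with expected travel time 2.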
Geometric Modeling and Physics Simulation Framework for Building a Digital Twin of Extrusion-based Additive Manufacturing
Abstract
Accurate simulation of the printing process is essential for improving print quality, reducing waste, and optimizing the printing parameters of extrusion-based additive manufacturing. Traditional additive manufacturing simulations are very compute-intensive and are not scalable to simulate even moderately sized geometries. In this paper, we propose a general framework for creating a digital twin of the dynamic printing process by performing physics simulations with the intermediate print geometries. Our framework takes a general extrusion-based additive manufacturing G-code, generates an analysis-suitable voxelized geometry representation from the print schedule, and performs physics-based (transient thermal and phase change) simulations of the printing process. Our approach leverages parallel adaptive octree meshes both for the voxelized geometry representation and for fast simulations to enable real-time predictions. We demonstrate the effectiveness of our method by simulating the printing of complex geometries at high voxel resolutions with both sparse and dense infills. Our results show that this approach scales to high voxel resolutions and can predict the transient heat distribution as the print progresses. This work lays the computational and algorithmic foundations for building real-time digital twins and performing rapid virtual print sequence exploration to improve print quality and further reduce material waste.
COLA: Characterizing and Optimizing the Tail Latency for Safe Level-4 Autonomous Vehicle Systems
Authors: Haolan Liu, Zixuan Wang, Jishen Zhao
Subjects: Robotics (cs.RO); Operating Systems (cs.OS); Performance (cs.PF)
Abstract
Autonomous vehicles (AVs) are envisioned to revolutionize our lives by providing safe, relaxing, and convenient ground transportation. The computing systems in such vehicles must interpret various sensor data and respond to the environment in a timely manner to ensure driving safety. However, such timing-related safety requirements are largely unexplored in prior works. In this paper, we conduct a systematic study to understand the timing requirements of AV systems. We focus on investigating and mitigating the sources of tail latency in Level-4 AV computing systems. We observe that the performance of AV algorithms is not uniformly distributed; instead, the latency is susceptible to fluctuations in the vehicle's environment, such as traffic density. These fluctuations trigger bursts of computation and memory accesses, which in turn lead to tail latency in the system. Furthermore, we observe that tail latency also arises from a mismatch between the pre-configured AV computation pipeline and the dynamic latency requirements of real-world driving scenarios. Based on these observations, we propose a set of system designs to mitigate AV tail latency. We demonstrate our design on the widely-used industrial Level-4 AV systems Baidu Apollo and Autoware. The evaluation shows that our design achieves a 1.65x improvement in worst-case latency and a 1.3x improvement in average latency, and avoids 93% of accidents on Apollo.
Simultaneous Modeling of In Vivo and In Vitro Effects of Nondepolarizing Neuromuscular Blocking Drugs
Abstract
Nondepolarizing neuromuscular blocking drugs (NDNBs) are clinically used to produce muscle relaxation during general anesthesia. This paper explores a suitable model structure and parameters to simultaneously describe the in vivo and in vitro effects of three clinically used NDNBs: cisatracurium, vecuronium, and rocuronium. In particular, we discuss how to reconcile an apparent discrepancy: rocuronium is less potent at inducing muscle relaxation in vivo than predicted from in vitro experiments. We develop a framework for estimating model parameters from published in vivo and in vitro data, and thereby compare the descriptive abilities of several candidate models. As a result, we show that dynamic modeling of the kinetics of competitive binding of acetylcholine (ACh) and NDNB molecules to ACh receptors (AChRs) is effective, and that the above discrepancy can be resolved under two assumptions: that the in vivo concentration of ACh is low enough that only a fraction of AChRs is activated, whereas more than 95% of AChRs are activated during in vitro experiments, and that the site-selectivity of rocuronium is smaller than that of cisatracurium and vecuronium.
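The competitive-binding kinetics can be illustrated with a toy mass-action simulation; the rate constants and normalized receptor pool below are invented for illustration, not fitted values from the paper:

```python
import numpy as np

def simulate_binding(ach, ndnb, k_on_a=1.0, k_off_a=0.5,
                     k_on_d=1.0, k_off_d=0.05, dt=1e-3, t_end=200.0):
    """Toy mass-action model of ACh and an NDNB competing for a shared,
    normalized receptor pool (all rate constants are illustrative).
    Returns the fraction of receptors occupied by ACh at the end."""
    bound_a = bound_d = 0.0
    for _ in range(int(t_end / dt)):
        free = 1.0 - bound_a - bound_d          # unoccupied receptors
        bound_a += dt * (k_on_a * ach * free - k_off_a * bound_a)
        bound_d += dt * (k_on_d * ndnb * free - k_off_d * bound_d)
    return bound_a

# The same blocker dose displaces far more ACh when agonist is scarce.
low_ach = simulate_binding(ach=0.1, ndnb=0.5)
high_ach = simulate_binding(ach=5.0, ndnb=0.5)
print(low_ach, high_ach)
```

With a fixed blocker dose, lowering the agonist concentration sharply reduces receptor occupancy by ACh, which is the qualitative effect behind the paper's resolution of the in vivo/in vitro discrepancy.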
Model Predictive Control of Smart Districts Participating in Frequency Regulation Market: A Case Study of Using Heating Network Storage
Authors: Hikaru Hoshino, T. John Koo, Yun-Chung Chu, Yoshihiko Susuki
Abstract
Flexibility provided by Combined Heat and Power (CHP) units in district heating networks is an important means of coping with the increasing penetration of intermittent renewable energy resources, and various methods have been proposed to exploit thermal storage tanks installed in these networks. This paper studies a novel problem motivated by the example of district heating and cooling networks in Japan, where high-temperature steam is used as the heating medium. In steam-based networks, storage tanks are usually absent, and there is a strong need to utilize the thermal inertia of the pipeline network as storage. However, this use of a heating network directly affects its operating condition, and assuring safety and supply quality on the use side is an open problem. To address this, we formulate a novel control problem for utilizing CHP units in the frequency regulation market while satisfying physical constraints on a steam network described by a nonlinear model that captures the dynamics of heat flows and heat accumulation in the network. Furthermore, a Model Predictive Control (MPC) framework is proposed to solve this problem. By consistently combining several nonlinear control techniques, a computationally efficient MPC controller is obtained and shown to work in real time.
Scaling Laws of Dynamic High-Capacity Ride-Sharing
Abstract
Dynamic ride-sharing services, including ride-pooling offered by ride-hailing platforms and demand-responsive buses, have become an essential part of urban mobility systems. These services cater to personalized and on-demand mobility requirements while simultaneously improving efficiency and sustainability by accommodating several trip requests within a single ride. However, quantifying the advantages and disadvantages of dynamic ride-sharing, particularly high-capacity ride-sharing, remains a challenge due to the complex dynamics that depend on several factors, including matching algorithms, vehicle capacity, transportation network topology, and spatiotemporal demand and supply distribution. In this study, we conduct extensive experiments on an agent-based simulation platform calibrated by real-world mobility data from Chengdu, Hong Kong, and Manhattan. Our findings reveal a few scaling laws that can effectively measure how key performance metrics such as passenger service rate and vehicle occupancy rate change with a dimensionless system loading factor that reflects the relative magnitude of demand versus supply. Moreover, our results indicate that these scaling laws are universal for different network topologies and supply-demand situations. As a result, these scaling laws offer a means for urban planners, city managers, and ride-hailing platforms to quantify the potential benefits and drawbacks of dynamic ride-sharing under different circumstances and to design better operational and regulatory strategies.
T-former: An Efficient Transformer for Image Inpainting
Authors: Ye Deng, Siqi Hui, Sanping Zhou, Deyu Meng, Jinjun Wang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Benefiting from powerful convolutional neural networks (CNNs), learning-based image inpainting methods have made significant breakthroughs over the years. However, certain properties of CNNs (e.g., the local prior and spatially shared parameters) limit their performance on broken images with diverse and complex forms. Recently, a class of attention-based network architectures, called transformers, has shown strong performance in natural language processing and high-level vision tasks. Compared with CNNs, attention operators are better at long-range modeling and have dynamic weights, but their computational complexity is quadratic in spatial resolution, making them less suitable for applications involving higher-resolution images, such as image inpainting. In this paper, we design a novel attention mechanism, derived via Taylor expansion, whose complexity is linear in the resolution. Based on this attention, we design a network called $T$-former for image inpainting. Experiments on several benchmark datasets demonstrate that our proposed method achieves state-of-the-art accuracy while maintaining a relatively low number of parameters and low computational complexity. The code can be found at \href{https://github.com/dengyecode/T-former_image_inpainting}{github.com/dengyecode/T-former\_image\_inpainting}
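The linearization trick can be sketched generically: truncating the softmax kernel to first order, exp(q.k) ~ 1 + q.k on normalized vectors, lets the attention output be computed from d x d summaries without ever materializing the N x N matrix. This is a hedged sketch of the general idea, not the exact T-former operator:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """First-order Taylor surrogate for softmax attention: with q and k
    L2-normalized, exp(q.k) is replaced by 1 + q.k (non-negative), so the
    output is assembled from d x d summaries and the cost is O(N d^2)
    instead of O(N^2 d)."""
    Q = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + eps)
    K = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)
    n = K.shape[0]
    kv = K.T @ V                      # (d, d) summary of keys and values
    num = V.sum(axis=0) + Q @ kv      # numerator of the (1 + q.k) average
    den = n + Q @ K.sum(axis=0)       # matching normalizer, per query
    return num / den[:, None]
```

For an H x W feature map with N = HW tokens, this keeps the cost linear in the number of pixels, which is why such approximations suit high-resolution inpainting.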
A Lightweight Authentication Protocol against Modeling Attacks based on a Novel LFSR-APUF
Authors: Yao Wang, Xue Mei, Zhengtai Chang, Wenbing Fan, Benqing Guo, Zhi Quan
Abstract
Simple authentication protocols based on conventional physical unclonable functions (PUFs) are vulnerable to modeling attacks and other security threats. This paper proposes an arbiter PUF based on a linear feedback shift register (LFSR-APUF). Different from previously reported uses of linear feedback shift registers for challenge extension, the proposed scheme feeds the external random challenges into the LFSR module to obfuscate the linear mapping between challenge and response. This prevents attackers from obtaining valid challenge-response pairs (CRPs), significantly increasing resistance to modeling attacks. A 64-stage LFSR-APUF has been implemented on a field-programmable gate array (FPGA) board. The experimental results reveal that the proposed design can effectively resist various modeling attacks, such as logistic regression (LR), evolutionary strategy (ES), artificial neural network (ANN), and support vector machine (SVM) attacks, holding the attack prediction rate to 51.79% with only a slight effect on randomness, reliability, and uniqueness. Further, a lightweight authentication protocol is established based on the proposed LFSR-APUF. The protocol incorporates a low-overhead, ultra-lightweight, novel private bit-conversion Cover function that is uniquely bound to each device in the authentication network. A dynamic and time-variant obfuscation scheme in combination with the proposed LFSR-APUF is implemented in the protocol. The proposed authentication protocol not only resists spoofing attacks, physical attacks, and modeling attacks effectively, but also ensures the security of the entire authentication network by transferring important information in encrypted form from the server to the database, even when an attacker completely controls the server.
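A Fibonacci LFSR scrambling step of the kind described might look as follows; the tap positions correspond to the common x^64 + x^63 + x^61 + x^60 + 1 polynomial and the cycle count is arbitrary, so both are assumptions rather than the paper's exact parameters:

```python
def lfsr_obfuscate(challenge_bits, taps=(64, 63, 61, 60), n_cycles=64):
    """Sketch of challenge obfuscation with a 64-stage Fibonacci LFSR:
    the raw challenge is loaded as the initial state and clocked for
    n_cycles, so the bits finally applied to the arbiter chain are a
    scrambled, state-dependent function of the raw challenge. Taps and
    cycle count are assumed, not taken from the paper."""
    state = list(challenge_bits)
    assert len(state) == 64 and set(state) <= {0, 1}
    for _ in range(n_cycles):
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]     # XOR of the tapped stages
        state = [feedback] + state[:-1]  # shift in the feedback bit
    return state

obfuscated = lfsr_obfuscate([1] * 64)
```

Because an attacker observing only (raw challenge, response) pairs never sees the scrambled bits actually applied to the arbiter chain, the linear delay model that LR-style attacks exploit no longer fits the observed data.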
Learning Quadruped Locomotion using Bio-Inspired Neural Networks with Intrinsic Rhythmicity
Authors: Chuanyu Yang, Can Pu, Tianqi Wei, Cong Wang, Zhibin Li
Abstract
Biological studies reveal that neural circuits in the spinal cord, called central pattern generators (CPGs), oscillate and generate rhythmic signals, which are the underlying mechanism responsible for the rhythmic locomotion of animals. Inspired by the CPG's capability to naturally generate rhythmic patterns, researchers have attempted to create mathematical models of CPGs and utilize them for the locomotion of legged robots. In this paper, we propose a network architecture that incorporates CPGs for rhythmic pattern generation and a multi-layer perceptron (MLP) network for sensory feedback. We also propose a method that reformulates CPGs into a fully-differentiable stateless network, allowing the CPGs and MLP to be jointly trained with gradient-based learning. The results show that our proposed method learns agile and dynamic locomotion policies that are capable of blind traversal over uneven terrain and of resisting external pushes. Simulation results also show that the learned policies are capable of self-modulating step frequency and step length to adapt to the locomotion velocity.
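The stateless reformulation can be illustrated by expressing each leg's rhythmic signal as a closed-form function of time, so no oscillator state has to be integrated forward; the trot-like phase offsets below are illustrative, and the paper's exact reformulation may differ:

```python
import numpy as np

def stateless_cpg(t, freq, amp, phase_offsets):
    """Stateless CPG sketch: each leg's rhythmic signal is a closed-form
    function of time, so no oscillator state is integrated and gradients
    can flow directly through freq, amp, and phase_offsets (the
    learnable quantities in this sketch)."""
    phase = 2.0 * np.pi * freq * t + phase_offsets
    return amp * np.sin(phase)

# Trot-like gait: diagonal leg pairs half a cycle out of phase
# (illustrative offsets for legs FL, FR, RL, RR).
offsets = np.array([0.0, np.pi, np.pi, 0.0])
signals = stateless_cpg(t=0.25, freq=1.0, amp=0.3, phase_offsets=offsets)
print(signals)  # [0.3, -0.3, -0.3, 0.3]
```

An MLP layered on top of this signal could then add sensory corrections, with the whole composition remaining differentiable end to end.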
Adaptive and Flexible Model-Based AI for Deep Receivers in Dynamic Channels
Authors: Tomer Raviv, Sangwoo Park, Osvaldo Simeone, Yonina C. Eldar, Nir Shlezinger
Abstract
Artificial intelligence (AI) is envisioned to play a key role in future wireless technologies, with deep neural networks (DNNs) enabling digital receivers to learn to operate in challenging communication scenarios. However, wireless receiver design poses unique challenges that fundamentally differ from those encountered in traditional deep learning domains. The main challenges arise from the limited power and computational resources of wireless devices, as well as from the dynamic nature of wireless communications, which causes continual changes to the data distribution. These challenges impair conventional AI based on highly-parameterized DNNs, motivating the development of adaptive, flexible, and lightweight AI for wireless communications, which is the focus of this article. Here, we propose that AI-based design of wireless receivers requires rethinking the three main pillars of AI: architecture, data, and training algorithms. In terms of architecture, we review how to design compact DNNs via model-based deep learning. Then, we discuss how to acquire training data for deep receivers without compromising spectral efficiency. Finally, we review efficient, reliable, and robust training algorithms via meta-learning and generalized Bayesian learning. Numerical results are presented to demonstrate the complementary effectiveness of each of the surveyed methods. We conclude by presenting opportunities for future research on the development of practical deep receivers.
S-REINFORCE: A Neuro-Symbolic Policy Gradient Approach for Interpretable Reinforcement Learning
Abstract
This paper presents a novel RL algorithm, S-REINFORCE, which is designed to generate interpretable policies for dynamic decision-making tasks. The proposed algorithm leverages two types of function approximators, namely Neural Network (NN) and Symbolic Regressor (SR), to produce numerical and symbolic policies, respectively. The NN component learns to generate a numerical probability distribution over the possible actions using a policy gradient, while the SR component captures the functional form that relates the associated states with the action probabilities. The SR-generated policy expressions are then utilized through importance sampling to improve the rewards received during the learning process. We have tested the proposed S-REINFORCE algorithm on various dynamic decision-making problems with low and high dimensional action spaces, and the results demonstrate its effectiveness and impact in achieving interpretable solutions. By leveraging the strengths of both NN and SR, S-REINFORCE produces policies that are not only well-performing but also easy to interpret, making it an ideal choice for real-world applications where transparency and causality are crucial.
Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access
Authors: Zheng Chen, Martin Dahl, Erik G. Larsson
Subjects: Networking and Internet Architecture (cs.NI); Machine Learning (cs.LG); Systems and Control (eess.SY)
Abstract
In this work, we focus on the communication aspect of decentralized learning, which involves multiple agents training a shared machine learning model using decentralized stochastic gradient descent (D-SGD) over distributed data. In particular, we investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD, considering the broadcast nature of wireless channels and the link dynamics in the communication topology. Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
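In the standard slotted random-access model, the access probability that maximizes the expected number of collision-free transmissions per slot is p = 1/n; the quick numerical check below uses this simplified model as a stand-in for the paper's topology-aware objective:

```python
import numpy as np

def expected_successful(p, n):
    """Expected collision-free transmissions per slot when each of n
    agents broadcasts independently with probability p (the classical
    slotted random-access model; the paper's objective, which accounts
    for the communication topology, is more general)."""
    return n * p * (1.0 - p) ** (n - 1)

n = 20
ps = np.linspace(0.0, 1.0, 10001)
p_best = ps[np.argmax(expected_successful(ps, n))]
print(p_best)  # 0.05, i.e. 1/n
```

Setting the derivative of n p (1-p)^(n-1) to zero gives p = 1/n analytically, which matches the grid search; too-aggressive access (large p) causes collisions, while too-timid access (small p) leaves slots idle, and both slow D-SGD convergence.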
Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding
Abstract
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performance in learning unsupervised sentence embeddings. However, in previous studies, each embedding used for contrastive learning is derived from only one sentence instance, and we call these embeddings instance-level embeddings. In other words, each embedding is regarded as a unique class of its own, which may hurt generalization performance. In this study, we propose IS-CSE (instance smoothing contrastive sentence embedding) to smooth the boundaries of embeddings in the feature space. Specifically, we retrieve embeddings from a dynamic memory buffer according to semantic similarity to obtain a positive embedding group. The embeddings in the group are then aggregated by a self-attention operation to produce a smoothed instance embedding for further learning. We evaluate our method on standard semantic textual similarity (STS) tasks and achieve an average of 78.30%, 79.47%, 77.73%, and 79.42% Spearman's correlation with BERT-base, BERT-large, RoBERTa-base, and RoBERTa-large respectively, a 2.05%, 1.06%, 1.16%, and 0.52% improvement over unsup-SimCSE.
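The aggregation step can be sketched as single-head dot-product attention over the retrieved group; this is a minimal illustration, not the paper's exact operator or memory-buffer mechanics:

```python
import numpy as np

def smooth_embedding(query, group):
    """Single-head dot-product attention over a retrieved group of
    semantically similar embeddings: the attention-weighted average
    smooths the instance-level embedding (a minimal sketch of the idea,
    not IS-CSE's exact operator)."""
    scores = group @ query / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ group

q = np.array([1.0, 0.0, 0.0, 0.0])
group = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.8, 0.0, 0.2, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
smoothed = smooth_embedding(q, group)
```

Because the weights are a softmax over similarities, near-duplicates of the query dominate the average, pulling the smoothed embedding toward a local neighborhood rather than a single instance.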
Dynamically Conservative Self-Driving Planner for Long-Tail Cases
Authors: Weitao Zhou, Zhong Cao, Nanshan Deng, Xiaoyu Liu, Kun Jiang, Diange Yang
Abstract
Self-driving vehicles (SDVs) are becoming reality but still suffer from "long-tail" challenges during natural driving: the SDVs will continually encounter rare, safety-critical cases that may not be included in the dataset they were trained on. Some safety-assurance planners solve this problem by being conservative in all possible cases, which can significantly degrade driving mobility. To this end, this work proposes a method to automatically adjust the conservative level according to each case's "long-tail" rate, named the dynamically conservative planner (DCP). We first define the "long-tail" rate as an SDV's confidence to pass a driving case. The rate indicates the probability of safety-critical events and is estimated using a statistical bootstrap method with historical data. Then, a reinforcement learning-based planner is designed to contain candidate policies with different conservative levels. The final policy is optimized based on the estimated "long-tail" rate. In this way, the DCP automatically adjusts to be more conservative in low-confidence "long-tail" cases while remaining efficient otherwise. The DCP is evaluated in the CARLA simulator using driving cases with "long-tail" distributed training data. The results show that the DCP can accurately estimate the "long-tail" rate to identify potential risks. Based on the rate, the DCP automatically avoids potential collisions in "long-tail" cases using conservative decisions while not affecting the average velocity in other typical cases. Thus, the DCP is safer and more efficient than baselines with fixed conservative levels, e.g., an always-conservative planner. This work provides a technique to guarantee SDV performance in unexpected driving cases without resorting to a globally conservative setting, contributing to a practical solution of the "long-tail" problem.
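The confidence estimate can be sketched with a plain bootstrap over historical pass/fail outcomes for a case; the percentile-based lower bound below is an illustrative stand-in for the paper's exact procedure:

```python
import numpy as np

def bootstrap_confidence(outcomes, n_boot=2000, seed=0):
    """Bootstrap estimate of an SDV's probability of passing a driving
    case, plus a resampling-based 5% lower confidence bound (an
    illustrative stand-in for the paper's statistical bootstrap).

    outcomes: array of 0/1 historical results (1 = case passed)."""
    rng = np.random.default_rng(seed)
    n = len(outcomes)
    means = np.array([
        rng.choice(outcomes, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    return means.mean(), np.percentile(means, 5)

# 90 passes out of 100 historical encounters with this case.
est, lower = bootstrap_confidence(np.array([1] * 90 + [0] * 10))
print(est, lower)  # roughly 0.90 and 0.85
```

A planner could then key its conservative level off the lower bound rather than the point estimate, so that rarely-seen cases with wide bootstrap spread automatically trigger more cautious policies.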
PillarAcc: Sparse PointPillars Accelerator for Real-Time Point Cloud 3D Object Detection on Edge Devices
Authors: Minjae Lee, Hyungmin Kim, Seongmin Park, Minyong Yoon, Janghwan Lee, Junwon Choi, Mingu Kang, Jungwook Choi
Abstract
3D object detection using point cloud (PC) data is vital for autonomous driving perception pipelines, where efficient encoding is key to meeting stringent resource and latency requirements. PointPillars, a widely adopted bird's-eye view (BEV) encoding, aggregates 3D point cloud data into 2D pillars for high-accuracy 3D object detection. However, most state-of-the-art methods employing PointPillars overlook the inherent sparsity of pillar encoding, missing opportunities for significant computational reduction. In this study, we propose a groundbreaking algorithm-hardware co-design that accelerates sparse convolution processing and maximizes sparsity utilization in pillar-based 3D object detection networks. We investigate sparsification opportunities using an advanced pillar-pruning method, achieving an optimal balance between accuracy and sparsity. We introduce PillarAcc, a state-of-the-art sparsity support mechanism that enhances sparse pillar convolution through linear-complexity input-output mapping generation and conflict-free gather-scatter memory access. Additionally, we propose dataflow optimization techniques that dynamically adjust the pillar processing schedule for optimal hardware utilization under diverse sparsity operations. We evaluate PillarAcc on various cutting-edge 3D object detection networks and benchmarks, achieving remarkable speedup and energy savings compared to representative edge platforms, demonstrating record-breaking PointPillars speed of 500 FPS with minimal compromise in accuracy.
Measuring Progress in Fine-grained Vision-and-Language Understanding
Authors: Emanuele Bugliarello, Laurent Sartran, Aishwarya Agrawal, Lisa Anne Hendricks, Aida Nematzadeh
Subjects: Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
Abstract
While pretraining on large-scale image-text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack "fine-grained" understanding, such as the ability to recognise relationships, verbs, and numbers in images. This has resulted in increased interest in the community in developing new benchmarks and models for such capabilities. To better understand and quantify progress in this direction, we investigate four competitive V&L models on four fine-grained benchmarks. Through our analysis, we find that X-VLM (Zeng et al., 2022) consistently outperforms other baselines, and that modelling innovations can impact performance more than scaling Web data, which sometimes even degrades performance. Through a deeper investigation of X-VLM, we highlight the importance of both novel losses and rich data sources for learning fine-grained skills. Finally, we inspect training dynamics, and discover that for some tasks, performance peaks early in training or fluctuates significantly, never converging.
Proactive Content Caching Scheme in Urban Vehicular Networks
Abstract
Streaming media content caching is a key enabling technology for promoting the value chain of future urban vehicular networks. Nevertheless, the high mobility of vehicles, intermittency of information transmissions, high dynamics of user requests, limited caching capacities, and extreme complexity of business scenarios pose enormous challenges to content caching and distribution in vehicular networks. To tackle this problem, this paper designs a novel edge-computing-enabled hierarchical cooperative caching framework. Firstly, we analyze the spatio-temporal correlations between historical vehicle trajectories and user requests and construct a system model to predict vehicle trajectories and content popularity, which lays the foundation for mobility-aware content caching and dispatching. Meanwhile, we investigate privacy protection strategies to realize a privacy-preserving prediction model. Furthermore, based on the trajectory and popular-content prediction results, a content caching strategy is studied, and adaptive and dynamic resource management schemes are proposed for hierarchical cooperative caching networks. Finally, simulations are provided to verify the superiority of the proposed scheme and algorithms. They show that the proposed algorithms effectively improve the performance of the considered system in terms of hit ratio and average delay, and narrow the gap to the optimal caching scheme compared with traditional schemes.
A Critical View Of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment
Abstract
Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, the Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment, Cross-Domain and Cross-Context, by proposing four datasets designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on combinations of the proposed datasets reveal potential weaknesses of vision-based long-term dynamics prediction models. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting this direction, which dramatically alleviates the challenge on the proposed datasets.
Keyword: efficient
Rethink Depth Separation with Intra-layer Links
The Privacy-Utility Tradeoff in Rank-Preserving Dataset Obfuscation
Theoretical Analyses of Evolutionary Algorithms on Time-Linkage OneMax with General Weights
Complexity of Efficient Outcomes in Binary-Action Polymatrix Games with Implications for Coordination Problems
Active Sensing for Two-Sided Beam Alignment and Reflection Design Using Ping-Pong Pilots
Efficient Coded Multi-Party Computation at Edge Networks
Foundations of Spatial Perception for Robotics: Hierarchical Representations and Real-time Systems
Exploring Zero and Few-shot Techniques for Intent Classification
Local Life: Stay Informed Around You, A Scalable Geoparsing and Geotagging Approach to Serve Local News Worldwide
Towards Understanding and Improving GFlowNet Training
Entropy-split multidimensional summation-by-parts discretization of the Euler and Navier-Stokes equations
Boosting Value Decomposition via Unit-Wise Attentive State Representation for Cooperative Multi-Agent Reinforcement Learning
Model Predictive Control of Smart Districts Participating in Frequency Regulation Market: A Case Study of Using Heating Network Storage
Rethinking k-means from manifold learning perspective
Progressive Material Caching
Parameterized Verification of Disjunctive Timed Networks
An Object SLAM Framework for Association, Mapping, and High-Level Tasks
Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions
Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation
Adaptive and Flexible Model-Based AI for Deep Receivers in Dynamic Channels
Multi-Wavelength Transponders for High-capacity Optical Networks: A Physical-layer-aware Network Planning Study
Methods and Tools to Advance the Retrieval of Mathematical Knowledge from Digital Libraries for Search-, Recommendation-, and Assistance-Systems
Do RESTful API Design Rules Have an Impact on the Understandability of Web APIs? A Web-Based Experiment with API Descriptions
Towards Versatile and Efficient Visual Knowledge Injection into Pre-trained Language Models with Cross-Modal Adapters
Optimized Schwarz methods for the time-dependent Stokes-Darcy coupling
Reliability Analysis of Gracefully Degrading Automotive Systems
Design and Development of a Java Parallel I/O Library
Knowledge Soft Integration for Multimodal Recommendation
Dimension results for extremal-generic polynomial systems over complete toric varieties
Optimizing Memory Mapping Using Deep Reinforcement Learning
Automata with Timers
Distributed Twins in Edge Computing: Blockchain and IOTA
Accelerating Statewide Connected Vehicles Big (Sensor Fusion) Data ETL Pipelines on GPUs
A Comprehensive Analysis of Adapter Efficiency
Dynamically Conservative Self-Driving Planner for Long-Tail Cases
AGFormer: Efficient Graph Representation with Anchor-Graph Transformer
PillarAcc: Sparse PointPillars Accelerator for Real-Time Point Cloud 3D Object Detection on Edge Devices
Understanding Automatic Differentiation Pitfalls
Supplementing Gradient-Based Reinforcement Learning with Simple Evolutionary Ideas
Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training
Scalable Coupling of Deep Learning with Logical Reasoning
Efficient Neural Network based Classification and Outlier Detection for Image Moderation using Compressed Sensing and Group Testing
The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma
Keyword: faster
Mem-Rec: Memory Efficient Recommendation System using Alternative Representation
ActUp: Analyzing and Consolidating tSNE and UMAP
Optimizing Memory Mapping Using Deep Reinforcement Learning
A Comprehensive Analysis of Adapter Efficiency
Gallery Sampling for Robust and Fast Face Identification
Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training
Keyword: mobile
Enhanced Hybrid Automatic Repeat Request Scheduling for Non-Terrestrial IoT Networks
Dish detection in food platters: A framework for automated diet logging and nutrition management
Keyword: pruning
Graph Neural Network for Accurate and Low-complexity SAR ATR
Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems
Gallery Sampling for Robust and Fast Face Identification
PillarAcc: Sparse PointPillars Accelerator for Real-Time Point Cloud 3D Object Detection on Edge Devices
Keyword: voxel
Geometric Modeling and Physics Simulation Framework for Building a Digital Twin of Extrusion-based Additive Manufacturing
Keyword: lidar
MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features
Keyword: diffusion
Hawkes Process based on Controlled Differential Equations
On a Voter Model with Context-Dependent Opinion Adoption
$α$-robust error estimates of general non-uniform time-step numerical schemes for reaction-subdiffusion problems
Keyword: dynamic
Hawkes Process based on Controlled Differential Equations
Automated Data Denoising for Recommendation
Theoretical Analyses of Evolutionary Algorithms on Time-Linkage OneMax with General Weights
Dynamic Routing in Stochastic Urban Air Mobility Networks: A Markov Decision Process Approach
Geometric Modeling and Physics Simulation Framework for Building a Digital Twin of Extrusion-based Additive Manufacturing
COLA: Characterizing and Optimizing the Tail Latency for Safe Level-4 Autonomous Vehicle Systems
Simultaneous Modeling of In Vivo and In Vitro Effects of Nondepolarizing Neuromuscular Blocking Drugs
Model Predictive Control of Smart Districts Participating in Frequency Regulation Market: A Case Study of Using Heating Network Storage
Scaling Laws of Dynamic High-Capacity Ride-Sharing
T-former: An Efficient Transformer for Image Inpainting
A Lightweight Authentication Protocol against Modeling Attacks based on a Novel LFSR-APUF
Learning Quadruped Locomotion using Bio-Inspired Neural Networks with Intrinsic Rhythmicity
Adaptive and Flexible Model-Based AI for Deep Receivers in Dynamic Channels
S-REINFORCE: A Neuro-Symbolic Policy Gradient Approach for Interpretable Reinforcement Learning
Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access
Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding
Dynamically Conservative Self-Driving Planner for Long-Tail Cases
PillarAcc: Sparse PointPillars Accelerator for Real-Time Point Cloud 3D Object Detection on Edge Devices
Measuring Progress in Fine-grained Vision-and-Language Understanding
Proactive Content Caching Scheme in Urban Vehicular Networks
A Critical View Of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment