Activity

  • Abdi Newell posted an update 3 months, 2 weeks ago

    Also, an adaptive dynamic programming (ADP) policy is proposed to address the zero-sum differential game problem in the optimal neural feedback controller, in which a hierarchical neural network is designed to yield solutions of the constrained Hamilton-Jacobi-Isaacs (HJI) equation online. The presented scheme not only ensures uniform ultimate boundedness of the closed-loop coupled FO chaotic electromechanical devices and realizes optimal synchronization but also minimizes the cost function. Simulation results further show the validity of the presented scheme.

    Learning over massive data stored in different locations is essential in many real-world applications. However, sharing data is fraught with challenges due to the increasing demands for privacy and security with the growing use of smart mobile and Internet of Things (IoT) devices. Federated learning offers a potential solution for privacy-preserving and secure machine learning by jointly training a global model without uploading the data distributed across multiple devices to a central server. However, most existing work on federated learning adopts machine learning models with full-precision weights, and almost all of these models contain a large number of redundant parameters that do not need to be transmitted to the server, incurring excessive communication costs. To address this issue, we propose a federated trained ternary quantization (FTTQ) algorithm, which optimizes the quantized networks on the clients through a self-learning quantization factor. Theoretical proofs of the convergence of the quantization factors, the unbiasedness of FTTQ, and a reduced weight divergence are given. On the basis of FTTQ, we propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
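    The communication saving comes from sending weights restricted to three values instead of full precision. As a minimal sketch of the general trained-ternary idea (the threshold rule and scaling factor below follow the standard ternary-weight recipe; FTTQ's self-learned quantization factor on each client is not reproduced here):

    ```python
    import numpy as np

    def ternarize(weights, t=0.05):
        """Ternarize a weight tensor to {-alpha, 0, +alpha}.

        `t` sets the ternarization threshold relative to the largest
        weight magnitude; `alpha` is a per-tensor scaling factor
        (here: mean magnitude of the surviving weights). Both are
        illustrative choices, not the FTTQ learning rule.
        """
        delta = t * np.max(np.abs(weights))     # ternarization threshold
        mask = np.abs(weights) > delta          # positions kept non-zero
        alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
        return alpha * np.sign(weights) * mask

    w = np.array([0.8, -0.03, 0.5, -0.6, 0.01])
    q = ternarize(w)   # small weights are zeroed, the rest share one magnitude
    ```

    Each quantized tensor can then be transmitted as a single float (`alpha`) plus roughly 2 bits per weight, rather than 32 bits per weight.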
    Empirical experiments are conducted to train widely used deep learning models on publicly available data sets, and our results demonstrate that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data than the canonical federated learning algorithms.

    In this work, we target cross-domain action recognition (CDAR) in the video domain and propose a novel end-to-end pairwise two-stream ConvNets (PTC) algorithm for real-life conditions, in which only a few labeled samples are available. To cope with the limited-training-sample problem, we employ a pairwise network architecture that can leverage training samples from a source domain and, thus, requires only a few labeled samples per category from the target domain. In particular, a frame self-attention mechanism and an adaptive weight scheme are embedded into the PTC network to adaptively combine the RGB and flow features. This design can effectively learn domain-invariant features for both the source and target domains. In addition, we propose a sphere-boundary sample-selecting scheme that selects the training samples at the boundary of a class (in the feature space) to train the PTC model. In this way, a well-enhanced generalization capability can be achieved. To validate the effectiveness of our PTC model, we construct two CDAR data sets (SDAI Action I and SDAI Action II) that include indoor and outdoor environments; all actions and samples in these data sets were carefully collected from public action data sets. To the best of our knowledge, these are the first data sets specifically designed for the CDAR task. Extensive experiments were conducted on these two data sets. The results show that PTC outperforms state-of-the-art video action recognition methods in terms of both accuracy and training efficiency.
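    The adaptive weight scheme for combining the RGB and flow streams can be pictured as a learned convex combination of the two feature vectors. A minimal sketch, assuming two learnable scalar logits (the names and softmax parameterization here are illustrative, not the exact PTC design):

    ```python
    import numpy as np

    def adaptive_fuse(rgb_feat, flow_feat, logits):
        """Fuse RGB and optical-flow features with adaptive weights.

        `logits` stands in for two learnable scalars; a softmax turns
        them into non-negative fusion weights that sum to 1, letting
        the network shift emphasis between appearance and motion.
        """
        e = np.exp(logits - np.max(logits))   # numerically stable softmax
        w = e / e.sum()                       # [w_rgb, w_flow]
        return w[0] * rgb_feat + w[1] * flow_feat

    rgb = np.ones(4)
    flow = np.zeros(4)
    # equal logits -> equal emphasis on both streams
    fused = adaptive_fuse(rgb, flow, np.array([0.0, 0.0]))
    ```

    In training, the logits would be updated by backpropagation together with the rest of the network, so the fusion ratio adapts to the data rather than being fixed by hand.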
    It is noteworthy that when only two labeled training samples per category are used in the SDAI Action I data set, PTC achieves 21.9% and 6.8% improvements in accuracy over the two-stream and temporal segment network models, respectively. As an added contribution, the SDAI Action I and SDAI Action II data sets will be released to facilitate future research on the CDAR task.

    The core component of most anomaly detectors is a self-supervised model tasked with modeling the patterns included in training samples and detecting unexpected patterns as anomalies in testing samples. To capture normal patterns, this model is typically trained with reconstruction constraints. However, the model risks overfitting to the training samples and being sensitive to hard normal patterns in the inference phase, which results in irregular responses at normal frames. To address this problem, we formulate anomaly detection as a mutual supervision problem. Through collaborative training, the complementary information of mutual learning can alleviate the aforementioned problem. Based on this motivation, a SIamese generative network (SIGnet), including two subnetworks with the same architecture, is proposed to simultaneously model the patterns of the forward and backward frames. During training, in addition to traditional constraints on improving the reconstruction performance, a bidirectional consistency loss based on the forward and backward views is designed as a regularization term to improve the generalization ability of the model. Moreover, we introduce a consistency-based evaluation criterion to achieve stable scores at normal frames, which benefits the detection of anomalies with fluctuating scores in the inference phase. The results on several challenging benchmark data sets demonstrate the effectiveness of our proposed method.

    Deep neural networks are vulnerable to adversarial attacks.
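    That vulnerability can be illustrated with a one-step gradient-sign perturbation (FGSM), shown here only as background for the attack setting; it is not the SMGEA method discussed next, which ensembles many source models. A toy linear model is used so the input gradient is known in closed form:

    ```python
    import numpy as np

    def fgsm_perturb(x, grad, eps=0.1):
        """One-step gradient-sign perturbation: move each input
        coordinate by eps in the direction that increases the loss
        (here, the model's score). Illustrative only."""
        return x + eps * np.sign(grad)

    # toy linear "model": score = w . x, so the input gradient is just w
    w = np.array([1.0, -2.0, 0.5])
    x = np.array([0.2, 0.1, -0.3])
    x_adv = fgsm_perturb(x, w, eps=0.1)
    # a tiny, bounded change to x strictly increases the score
    ```

    Transfer-based attacks exploit the fact that perturbations like this, computed on accessible source models, often remain effective against unseen target models.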
    More importantly, some adversarial examples crafted against an ensemble of source models transfer to other target models and, thus, pose a security threat to black-box applications (in which attackers have no access to the target models). Current transfer-based ensemble attacks, however, consider only a limited number of source models when crafting an adversarial example and, thus, achieve poor transferability. Besides, recent query-based black-box attacks, which require numerous queries to the target model, not only arouse the suspicion of the target model but also incur expensive query costs. In this article, we propose a novel transfer-based black-box attack, dubbed the serial-minigroup-ensemble-attack (SMGEA). Concretely, SMGEA first divides a large number of pretrained white-box source models into several “minigroups.” For each minigroup, we design three new ensemble strategies to improve the intragroup transferability. Moreover, we propose a new algorithm that recursively accumulates the “long-term” gradient memories of the previous minigroup into the subsequent minigroup.
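    The recursive accumulation step can be sketched as carrying a running gradient memory from one minigroup to the next, so information from earlier source models keeps steering the perturbation. The additive update and decay factor below are illustrative assumptions, not the exact SMGEA recursion:

    ```python
    import numpy as np

    def accumulate_memory(memory, group_grad, decay=1.0):
        """Fold the previous minigroups' gradient memory into the
        current minigroup's gradient. `decay` (an assumed knob)
        controls how strongly old memory persists."""
        return decay * memory + group_grad

    # gradients produced by three successive minigroups (toy values)
    grads = [np.array([0.5, -0.2]), np.array([0.1, -0.4]), np.array([0.3, 0.1])]
    memory = np.zeros(2)
    for g in grads:
        memory = accumulate_memory(memory, g)
    # `memory` now aggregates all three minigroups; an attack would
    # step the input along np.sign(memory) rather than the latest gradient
    ```

    Using the accumulated memory instead of only the current minigroup's gradient is what makes the resulting direction "long-term": it is shaped by every source model seen so far.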
