self-superv search results

Showing 20 out of 27 articles for self-superv
arxiv.org | Yesterday
Summary:
Accurate ADMET (an abbreviation for "absorption, distribution, metabolism, excretion, and toxicity") predictions can efficiently screen out undesirable drug candidates in the early stage of drug discovery. In recent years, multiple comprehensive ADMET systems that adopt advanced machine learning models have been developed, providing services to estimate multiple endpoints. However, those ADMET systems usually suffer from weak extrapolation ability. First, due to the lack of labelled data for eac...


Keywords: machine learning, self-supervised

arxiv.org | Yesterday
Summary:
Self-supervised learning (SSL) methods have proven to be very successful in automatic speech recognition (ASR). These great improvements have been reported mostly on highly curated datasets such as LibriSpeech, for non-streaming end-to-end ASR models. However, the pivotal characteristic of SSL is that it can be utilized on any untranscribed audio data. In this paper, we provide a full exploration of how to utilize uncurated audio data in SSL, from data pre-processing to deploying a streaming hybri...


Keywords: self-supervised, supervised learning, speech recognition

towardsdatascience.com | Yesterday
Summary:
Code demo for text classification with two popular pre-trained Hugging Face models. I personally believe all the fancy ML research and advanced AI algorithm work has very minimal value, if not zero, until the date wh...


Keywords: transformer, analysis, framework, python, pre-trained
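
The snippet cuts off before the article's code, so here is a minimal, hypothetical sketch of the kind of demo it describes: text classification with one pre-trained Hugging Face model via the transformers pipeline API. The model name is an illustrative stand-in, not necessarily one of the two models the article actually uses.

```python
# Minimal text-classification sketch with Hugging Face transformers.
# The model name is an illustrative stand-in, not taken from the article.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

texts = [
    "The pretrained model fine-tuned easily on our data.",
    "Training diverged and the results were unusable.",
]
for result in classifier(texts):
    print(result["label"], round(result["score"], 3))
```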

arxiv.org | Yesterday
Summary:
Recent research on style transfer takes inspiration from unsupervised neural machine translation (UNMT), learning from large amounts of non-parallel data by exploiting cycle consistency loss, back-translation, and denoising autoencoders. By contrast, the use of self-supervised NMT (SSNMT), which leverages (near) parallel instances hidden in non-parallel data more efficiently than UNMT, has not yet been explored for style transfer. In this paper we present a novel Self-Supervised Style Transfer (...


Keywords: self-supervised
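
The abstract is truncated before any method detail, but as a toy illustration of the SSNMT idea it names, leveraging (near-)parallel instances hidden in non-parallel data, one can mine sentence pairs across two corpora by embedding similarity. Everything below (the random stand-in embeddings, the mutual-nearest-neighbour rule, the threshold) is a hypothetical sketch, not the paper's method.

```python
# Toy sketch: mine pseudo-parallel sentence pairs from two non-parallel
# corpora by cosine similarity of sentence embeddings. Random vectors
# stand in for real encoder outputs, so few or no pairs may pass.
import numpy as np

rng = np.random.default_rng(0)
src_emb = rng.normal(size=(100, 64))   # source-style sentence embeddings
tgt_emb = rng.normal(size=(120, 64))   # target-style sentence embeddings

# Cosine similarity between every source/target pair.
src_n = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
tgt_n = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
sim = src_n @ tgt_n.T

# Keep mutual nearest neighbours above a threshold as pseudo-parallel pairs.
threshold = 0.3
pairs = []
for i in range(sim.shape[0]):
    j = int(sim[i].argmax())
    if sim[i, j] >= threshold and int(sim[:, j].argmax()) == i:
        pairs.append((i, j))
print(f"mined {len(pairs)} pseudo-parallel pairs")
```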

arxiv.org | Yesterday
Summary:
Recently, anomaly detection and localization in multimedia data have received significant attention in the machine learning community. In real-world applications such as medical diagnosis and industrial defect detection, anomalies are present in only a fraction of the images. To extend the reconstruction-based anomaly detection architecture to localized anomalies, we propose a self-supervised learning approach through random masking and then restoring, named Self-Supervised Masking (SSM) for ...


Keywords: machine learning, self-supervised, supervised learning
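
As a rough sketch of the masking-then-restoring scheme the abstract describes, the snippet below masks random patches and trains a stand-in autoencoder to restore them, then uses per-pixel restoration error as an anomaly map. The architecture, patch size, and masking rate are made-up assumptions, not the paper's.

```python
# Sketch of masking-then-restoring for anomaly localization, with a
# stand-in autoencoder (not the paper's architecture or schedule).
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Stand-in reconstruction network."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 8, 3, padding=1)
        self.dec = nn.Conv2d(8, 1, 3, padding=1)
    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

def random_mask(x, patch=8, drop=0.3):
    """Zero out roughly a `drop` fraction of patch-aligned regions."""
    b, c, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch) > drop).float()
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return x * mask, mask

model = TinyAE()
img = torch.rand(4, 1, 64, 64)          # toy batch of grayscale images
masked, mask = random_mask(img)
restored = model(masked)

# Self-supervised training target: restore the masked regions.
loss = ((restored - img) ** 2 * (1 - mask)).mean()

# At test time, per-pixel restoration error localizes anomalies.
anomaly_map = (restored - img).abs().squeeze(1)
print(loss.item(), anomaly_map.shape)
```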

sh-tsang.medium.com | Today
Summary:
S⁴L combines the self-supervised approach and the semi-supervised approach. Continue reading on Medium...


Keywords: self-supervised, supervised learning

arxiv.org | Yesterday
Summary:
Generalizing learned representations across significantly different visual domains is a fundamental yet crucial ability of the human visual system. While recent self-supervised learning methods have achieved good performance when the evaluation set comes from the same domain as the training set, they suffer an undesirable performance decrease when tested on a different domain. Therefore, the task of self-supervised learning from multiple domains is proposed to learn domain-invariant features that are not o...


Keywords: test, visual, self-supervised, supervised learning

arxiv.org | Yesterday
Summary:
Prior self-supervised learning research mainly selects image-level instance discrimination as the pretext task. It achieves fantastic classification performance, comparable to supervised learning methods, but with degraded transfer performance on downstream tasks such as object detection. To bridge the performance gap, we propose a novel object-level self-supervised learning method, called Contrastive learning with Downstream background invariance (CoDo). The pretext task is conve...


Keywords: classification, object detection, contrastive, self-supervised,

analyticsindiamag.com | Yesterday
Summary:
Gato is a multi-modal, multi-task, multi-embodiment generalist policy. The post "DeepMind's Gato is the Swiss Army Knife of AI models" appeared first on Analytics India Magazine....


Keywords: glue, network, self-supervised, ai

arxiv.org | Yesterday
Summary:
As an important and challenging problem, few-shot image generation aims at generating realistic images through training a GAN model given few samples. A typical solution for few-shot generation is to transfer a well-trained GAN model from a data-rich source domain to the data-deficient target domain. In this paper, we propose a novel self-supervised transfer scheme termed D3T-GAN, addressing the cross-domain GANs transfer in few-shot image generation. Specifically, we design two individual strat...


Keywords: design, few-shot, self-supervised

paperswithcode.com | Today
Summary:
Recent research on style transfer takes inspiration from unsupervised neural machine translation (UNMT), learning from large amounts of non-parallel data by exploiting cycle consistency loss, back-translation, and denoising autoencoders. Code...


Keywords: self-supervised

paperswithcode.com | Today
Summary:
Nowadays, deep neural networks outperform humans in many tasks. Code...


Keywords: test, prototype, network, neural network

medium.com | Yesterday
Summary:
Self-supervised learning is a way of training a deep learning model with labels inherently obtained from the data itself. Initially, generative modelling approaches, including auto-encoders and GANs, were used to achieve this, but they failed to achieve on-par results compared to supervised training. Recent developments in contrastive learning have improved the results a lot, with papers discussed below such as SwAV (Facebook) and SimCLR (Google) reaching SOTA on image classification tasks on the ImageNet dataset....


Keywords: deep learning, imagenet, classification, self-supervised,
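
For concreteness, here is a minimal sketch of the NT-Xent contrastive loss that SimCLR popularized (SwAV uses a different, clustering-based objective). The batch size, embedding dimension, and temperature are arbitrary toy values.

```python
# Minimal NT-Xent (SimCLR-style) contrastive loss, assuming z1 and z2
# are projections of two augmented views of the same batch of images.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x D
    sim = z @ z.t() / temperature                        # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                    # drop self-similarity
    n = z1.shape[0]
    # The positive for sample i is its other view at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(8, 128)   # view-1 projections (toy values)
z2 = torch.randn(8, 128)   # view-2 projections
print(nt_xent(z1, z2).item())
```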

arxiv.org | Yesterday
Summary:
Based on the digital whole-slide scanning technique, artificial intelligence algorithms represented by deep learning have achieved remarkable results in the field of computational pathology. Compared with other medical images such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), pathological images are more difficult to annotate; thus there is an extreme lack of datasets that can be used for supervised learning. In this study, a self-supervised learning (SSL) model, Global Contrast ...


Keywords: algorithms, artificial intelligence, self-supervised, supervised

arxiv.org | Yesterday
Summary:
Self-supervised category-agnostic segmentation of real-world images into objects is a challenging open problem in computer vision. Here, we show how to learn static grouping priors from motion self-supervision, building on the cognitive science notion of Spelke Objects: groupings of stuff that move together. We introduce Excitatory-Inhibitory Segment Extraction Network (EISEN), which learns from optical flow estimates to extract pairwise affinity graphs for static scenes. EISEN then produces seg...


Keywords: computer vision, network
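
As a toy illustration of the "stuff that moves together" signal the abstract describes (not EISEN's learned affinity network), pairwise affinities can be derived from optical-flow similarity; points with similar flow vectors get high affinity. The flow values and kernel bandwidth below are invented.

```python
# Toy pairwise affinity from optical-flow vectors: co-moving points
# form high-affinity blocks, which a grouping method can then segment.
import numpy as np

# Flow vectors for 6 points: first 3 move right, last 3 move down.
flow = np.array([[1.0, 0.0], [1.1, 0.0], [0.9, 0.1],
                 [0.0, 1.0], [0.1, 1.2], [0.0, 0.9]])

diff = flow[:, None, :] - flow[None, :, :]             # pairwise differences
affinity = np.exp(-np.sum(diff ** 2, axis=-1) / 0.5)   # Gaussian kernel

np.set_printoptions(precision=2, suppress=True)
print(affinity)   # block structure: two groups of co-moving points
```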

arxiv.org | Yesterday
Summary:
Fully connected deep neural networks (DNNs) often include redundant weights, leading to overfitting and high memory requirements. Additionally, the performance of DNNs is often challenged by traditional machine learning models in tabular data classification. In this paper, we propose periodical perturbations (prune and regrow) of DNN weights, especially at the self-supervised pre-training stage of deep autoencoders. The proposed weight perturbation strategy outperforms dropout learning in four out ...


Keywords: classification, machine learning, self-supervised, neural
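
A minimal sketch of a periodic prune-and-regrow perturbation on a single weight matrix: zero the smallest-magnitude weights, then re-initialize an equal number of previously zeroed positions. The rates and schedule are made up; the paper's strategy for deep autoencoders is not shown in the snippet.

```python
# Toy prune-and-regrow perturbation applied periodically to one weight
# matrix (rates and schedule are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))

def prune_and_regrow(w, rate=0.2):
    k = int(rate * w.size)
    # Prune: zero out the k smallest-magnitude weights.
    prune_idx = np.argsort(np.abs(w).ravel())[:k]
    w.ravel()[prune_idx] = 0.0
    # Regrow: re-initialize k randomly chosen zeroed positions.
    zero_idx = np.flatnonzero(w.ravel() == 0.0)
    regrow_idx = rng.choice(zero_idx, size=k, replace=False)
    w.ravel()[regrow_idx] = rng.normal(scale=0.01, size=k)
    return w

for epoch in range(5):          # perturb periodically during pre-training
    w = prune_and_regrow(w)
print(np.mean(w == 0.0))        # fraction of weights currently pruned
```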

arxiv.org | Yesterday
Summary:
In person re-identification (ReID), very recent research has validated that pre-training models on unlabelled person images is much better than on ImageNet. However, these studies directly apply the existing self-supervised learning (SSL) methods designed for image classification to ReID without any adaptation of the framework. These SSL methods match the outputs of local views (e.g., red T-shirt, blue shorts) to those of the global views at the same time, losing lots of details. In this pape...


Keywords: framework, classification, self-supervised, imagenet, tpu

arxiv.org | Yesterday
Summary:
Nowadays, deep neural networks outperform humans in many tasks. However, if the input distribution drifts away from the one used in training, their performance drops significantly. Recently published research has shown that adapting the model parameters to the test sample can mitigate this performance degradation. In this paper, we therefore propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples. Using the provided protot...


Keywords: test, self-supervised, neural network, network
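
As a hedged sketch of single-sample test-time adaptation in the spirit of the abstract (the paper's actual SwAV-based modification differs in detail), one can sharpen a test sample's soft assignment to frozen prototypes by taking one gradient step on the encoder. The encoder, prototype count, temperature, and learning rate below are all stand-ins.

```python
# Toy test-time adaptation: minimize the entropy of a single test
# sample's soft assignment to frozen prototypes by updating the encoder.
import torch
import torch.nn.functional as F

encoder = torch.nn.Linear(32, 16)                     # stand-in encoder
prototypes = F.normalize(torch.randn(10, 16), dim=1)  # frozen prototypes
opt = torch.optim.SGD(encoder.parameters(), lr=0.01)

x = torch.randn(1, 32)                    # a single test sample
z = F.normalize(encoder(x), dim=1)
logits = z @ prototypes.t() / 0.1         # similarities to prototypes
p = logits.softmax(dim=1)
entropy = -(p * p.clamp_min(1e-8).log()).sum()

opt.zero_grad()
entropy.backward()
opt.step()                                # adapt to this one sample
```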

arxiv.org | Yesterday
Summary:
Dual embodied-symbolic concept representations are the foundation for deep learning and symbolic AI integration. We discuss the use of dual embodied-symbolic concept representations for molecular graph representation learning, specifically with exemplar-based contrastive self-supervised learning (SSL). The embodied representations are learned from molecular graphs, and the symbolic representations are learned from the corresponding Chemical knowledge graph (KG). We use the Chemical KG to enhance...


Keywords: contrastive, self-supervised, supervised learning, deep

arxiv.org | Yesterday
Summary:
In this paper, we present a self-supervised learning framework for continually learning representations for new sound classes. The proposed system relies on a continually trained neural encoder that is trained with similarity-based learning objectives without using labels. We show that representations learned with the proposed method generalize better and are less susceptible to catastrophic forgetting than fully-supervised approaches. Remarkably, our technique does not store past data or models...


Keywords: framework, self-supervised, supervised learning

