Mindblown: a blog about philosophy.

  • Learning a chain in a deep neural network

    Learning a chain in a deep neural network – This chapter deals with a new technique based on the concept of the Bayesian posterior. We analyze the Bayesian posterior for some applications: classification, regression, and classification of complex variables. We show that the posterior is consistent with uncertainty in a Bayesian network and that our […]
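
The excerpt appeals to the Bayesian posterior for classification and regression. The post's own construction is truncated, so purely as a hedged illustration of the general idea, here is the standard conjugate Beta-Bernoulli posterior update in plain Python (the function names are illustrative, not from the post):

```python
def beta_bernoulli_posterior(successes, failures, alpha=1.0, beta=1.0):
    """Conjugate update: a Beta(alpha, beta) prior over a Bernoulli
    success probability becomes Beta(alpha + s, beta + f) after
    observing s successes and f failures."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# 7 positive and 3 negative observations under a uniform Beta(1, 1) prior
a, b = beta_bernoulli_posterior(7, 3)
print(a, b)                  # 8.0 4.0
print(posterior_mean(a, b))  # ~0.667
```

The posterior mean shrinks the raw frequency 7/10 toward the prior mean 1/2, which is the usual sense in which a posterior carries uncertainty about a classification.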

  • Efficient Regularized Estimation of Graph Mixtures by Random Projections

    Efficient Regularized Estimation of Graph Mixtures by Random Projections – A general generalization algorithm is given and, to show its utility, a method of the same name is compared; for each algorithm, a new one is computed. A specific algorithm is analyzed and its utility is compared to that of random projection methods, and the […]
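
The excerpt compares against random projection methods. As a hedged, stdlib-only sketch of what random projection does in general (not the post's algorithm), the snippet below projects 500-dimensional vectors to 100 dimensions with a scaled Gaussian matrix and checks that pairwise distance is roughly preserved, which is the Johnson-Lindenstrauss property:

```python
import math
import random

def random_projection_matrix(d, k, seed=0):
    """k x d Gaussian matrix scaled by 1/sqrt(k), so that squared
    Euclidean distances are preserved in expectation."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
            for _ in range(k)]

def project(R, x):
    """Matrix-vector product R @ x."""
    return [sum(r * xi for r, xi in zip(row, x)) for row in R]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

d, k = 500, 100
R = random_projection_matrix(d, k, seed=42)
data_rng = random.Random(1)
x = [data_rng.gauss(0, 1) for _ in range(d)]
y = [data_rng.gauss(0, 1) for _ in range(d)]

ratio = dist(project(R, x), project(R, y)) / dist(x, y)
print(f"distance ratio after projection: {ratio:.3f}")  # close to 1
```

The typical distortion shrinks like 1/sqrt(k), which is why random projections are a cheap preprocessing step for estimation in high dimensions.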

  • Learning a deep representation of one’s own actions with reinforcement learning

    Learning a deep representation of one’s own actions with reinforcement learning – This paper describes a method to learn a deep neural network as a set of inputs. We propose a variant of the recurrent neural network (RNN) model consisting of $n$ recurrent cells in pairs for input and reward, and $n$ reward cells in […]

  • Neural sequence-point discrimination

    Neural sequence-point discrimination – In this paper, we propose a novel deep-learning-based algorithm which is capable of accurately distinguishing one segment from another by learning the relationship between the two. Furthermore, our algorithm performs deep learning by learning the relationships among three image features (e.g., color, texture, and illumination). This deep pattern […]

  • Deep Learning-Based Image Retrieval that Explains Brain

    Deep Learning-Based Image Retrieval that Explains Brain – We aim to obtain a high level of attention for object recognition tasks by learning to estimate the objects and infer features that are useful for recognizing them. Although such models use a large number of hand-crafted labels, we show that these labels can be used to […]

  • An Empirical Evaluation of Unsupervised Deep Learning for Visual Tracking and Recognition

    An Empirical Evaluation of Unsupervised Deep Learning for Visual Tracking and Recognition – We present a large-scale evaluation of Deep Convolutional Neural Networks (CNNs) on large data collections from CNNs. We compare the results obtained on the most recent set of CNNs that were used. Most of the previously published DNN evaluations […]

  • Theorem Proving: The Devil is in the Tails! Part II: Theoretical Analysis of Evidence, Beliefs and Realizations

    Theorem Proving: The Devil is in the Tails! Part II: Theoretical Analysis of Evidence, Beliefs and Realizations – We consider the problem of determining the likelihood of a given hypothesis when no prior knowledge is available. It is shown that the likelihood we assign to a given hypothesis is much more appropriate if we know the prior […]
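
The excerpt contrasts the likelihood of a hypothesis with and without prior knowledge. The post's analysis is truncated, so only as a hedged, generic illustration: the maximum-likelihood estimate of a Gaussian mean ignores any prior, while the MAP estimate is a precision-weighted average that pulls toward it (all numbers below are invented for the example):

```python
def mle_mean(xs):
    """Maximum-likelihood estimate of a Gaussian mean: the sample mean."""
    return sum(xs) / len(xs)

def map_mean(xs, sigma2, mu0, tau2):
    """MAP estimate with known noise variance sigma2 and a N(mu0, tau2)
    prior on the mean: a precision-weighted average of prior and data."""
    n = len(xs)
    return (mu0 / tau2 + sum(xs) / sigma2) / (1.0 / tau2 + n / sigma2)

xs = [2.0, 2.5, 1.5, 2.0]
print(mle_mean(xs))                                 # 2.0
print(map_mean(xs, sigma2=1.0, mu0=0.0, tau2=1.0))  # 1.6
```

As n grows, the data precision n/sigma2 dominates and the MAP estimate approaches the MLE, matching the intuition that prior knowledge matters most when evidence is scarce.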

  • Multi-Winner Semi-Supervised Clustering using a Structured Boltzmann Machine

    Multi-Winner Semi-Supervised Clustering using a Structured Boltzmann Machine – Many previous methods exploit the fact that a set of labels (in the form of a latent vector) can be assigned to a set of labels (the set of labels themselves) to learn a model of a problem. While this is a very simple approach, it […]

  • Fast and Accurate Low Rank Estimation Using Multi-resolution Pooling

    Fast and Accurate Low Rank Estimation Using Multi-resolution Pooling – In this paper, we propose multi-resolution pooling for multi-image scenes to compute fast and accurate 3D hand pose estimates. Multi-resolution pooling is a generic technique for solving three-dimensional object estimation problems where multiple datasets are available. The aim of pooling is to generate […]
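
Multi-resolution pooling itself is a standard idea even though the excerpt's setting is truncated. As a hedged sketch only (a spatial-pyramid-style descriptor, not the post's estimator), the snippet max-pools the same 2D grid at several window sizes and concatenates the results:

```python
def max_pool(grid, window):
    """Non-overlapping max pooling of a 2D grid with a square window."""
    h, w = len(grid), len(grid[0])
    return [[max(grid[i + di][j + dj]
                 for di in range(window) for dj in range(window))
             for j in range(0, w, window)]
            for i in range(0, h, window)]

def multi_resolution_pool(grid, windows=(1, 2, 4)):
    """Pool the grid at each resolution and flatten everything into
    one feature vector, coarse summaries after fine ones."""
    features = []
    for win in windows:
        pooled = max_pool(grid, win)
        features.extend(v for row in pooled for v in row)
    return features

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
features = multi_resolution_pool(grid)
print(len(features))   # 16 + 4 + 1 = 21
print(features[16:])   # [6, 8, 14, 16, 16] — the win=2 and win=4 summaries
```

Concatenating several resolutions is what lets a downstream estimator see both fine detail and coarse layout from a single fixed-length vector.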

  • Learning Algorithms for Large Scale Machine Learning

    Learning Algorithms for Large Scale Machine Learning – The recent work of Zhang and Zhang has mainly focused on finding a set of sparse features that can map to a sparse matrix in a more efficient manner. For instance, it is proposed that learning is an optimization problem, and if we learn the sparse matrix […]
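
The excerpt frames sparse-feature learning as an optimization problem. Since the post's formulation is cut off, here is only a generic hedged sketch: iterative soft-thresholding (ISTA) for a tiny lasso problem, one standard way to cast sparse estimation as optimization (the data and step size are invented for the example):

```python
def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero."""
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def ista(X, y, lam=0.5, step=0.1, iters=500):
    """Iterative soft-thresholding for min_w 0.5*||Xw - y||^2 + lam*||w||_1:
    a gradient step on the smooth part, then an L1 shrinkage step."""
    Xt = [list(col) for col in zip(*X)]
    w = [0.0] * len(X[0])
    for _ in range(iters):
        r = [p - yi for p, yi in zip(matvec(X, w), y)]  # residual Xw - y
        g = matvec(Xt, r)                               # gradient X^T r
        w = soft_threshold([wi - step * gi for wi, gi in zip(w, g)],
                           step * lam)
    return w

# y depends only on the first feature, so the lasso should zero out the second.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
y = [2.0, 0.0, 2.0]
w = ista(X, y)
print([round(v, 2) for v in w])  # [1.75, 0.0]
```

The result can be checked by hand: optimality balances the gradient 2(w1 - 2) against the subgradient lam, giving w1 = 2 - lam/2 = 1.75, while the useless second weight is driven exactly to zero.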
