Machine Learning

[Figure: Bayes' theorem]

Machine learning, a branch of artificial intelligence, is a scientific discipline concerned with the development of algorithms that take as input empirical data (from sensors or databases), identify complex relationships, and employ these identified patterns to make predictions. The algorithm studies a portion of the observed data (called ‘training data’) to capture characteristics of interest. Optical character recognition, in which printed characters are recognized automatically based on previous examples, is a classic engineering example of machine learning.

In 1959, AI pioneer Arthur Samuel defined machine learning as a ‘Field of study that gives computers the ability to learn without being explicitly programmed.’ Computer scientist Tom M. Mitchell provided a widely quoted, more formal definition: ‘A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.’

One fundamental difficulty is that the set of all possible behaviors given all possible inputs is (in most cases of practical interest) too large to be included in the set of observed examples. Hence the learner must generalize from the given examples in order to produce a useful output. Generalization in this context is the ability of an algorithm to perform accurately on new, unseen examples after having trained on a learning data set. The core objective of a learner is to generalize from its experience.
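As a concrete illustration of generalization, the sketch below trains a classifier on one portion of a data set and measures its accuracy on examples withheld from training (incidentally, a small-scale version of the optical character recognition task mentioned earlier). It assumes scikit-learn is available; the handwritten-digits data set and the logistic regression model are arbitrary illustrative choices, not methods prescribed by the text.

```python
# A minimal sketch of measuring generalization: learn from training data only,
# then evaluate on held-out examples the learner has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)            # handwritten-digit images and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                     # fit on the training portion only

# Accuracy on unseen examples estimates how well the learner generalizes.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```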

Machine learning (predictions based on known properties, learned from training data) is sometimes confused with data mining, which is the discovery of (previously) unknown properties in the data. The two areas overlap in many ways: data mining uses many machine learning methods, but often with a slightly different goal in mind. On the other hand, machine learning also employs data mining methods as ‘unsupervised learning’ or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which have separate conferences and separate journals) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in data mining the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by supervised methods, while in a typical data mining task, supervised methods cannot be used because labeled training data are unavailable.

Some machine learning systems attempt to eliminate the need for human intuition in data analysis, while others adopt a collaborative approach between human and machine. Human intuition cannot, however, be entirely eliminated, since the system’s designer must specify how the data is to be represented and what mechanisms will be used to search for a characterization of the data.

Machine learning algorithms can be organized into a taxonomy based on the desired outcome of the algorithm. ‘Supervised learning’ is the task of inferring a function from labeled training data. ‘Unsupervised learning’ is the problem of trying to find hidden structure in unlabeled data. ‘Semi-supervised learning’ combines both labeled and unlabeled examples to generate an appropriate function or classifier. ‘Reinforcement learning’ learns how to act given an observation of the world. Every action has some impact on the environment, and the environment provides feedback in the form of rewards that guide the learning algorithm. ‘Transduction’ is reasoning from observed, specific (training) cases to specific (test) cases (by contrast, induction is reasoning from observed training cases to general rules). ‘Learning to learn’ is where an algorithm learns its own inductive bias (the set of assumptions used to predict outputs) based on previous experience.
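As a small illustration of the semi-supervised setting, the sketch below hides most of the labels in a data set and lets a label-propagation style algorithm infer them from the few that remain. It assumes scikit-learn; the data set, the specific label-spreading method, and the fraction of hidden labels are arbitrary choices made for illustration.

```python
# A minimal semi-supervised learning sketch: labels marked -1 are treated as
# unknown, and the algorithm spreads the known labels through feature space.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)

y_partial = np.copy(y)
y_partial[rng.rand(len(y)) < 0.9] = -1     # hide roughly 90% of the labels

model = LabelSpreading()
model.fit(X, y_partial)                    # uses both labeled and unlabeled examples

# Compare the inferred labels against the held-back ground truth.
mask = y_partial == -1
print("accuracy on originally unlabeled points:",
      np.mean(model.transduction_[mask] == y[mask]))
```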

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. There are many similarities between machine learning theory and statistics, although they use different terms.
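One classical example of a positive result of this kind, quoted here for concreteness rather than taken from the text above, is the sample-complexity bound for a finite hypothesis class in the PAC (probably approximately correct) framework:

```latex
% Standard PAC sample-complexity bound for a finite hypothesis class H
% (a textbook result, stated here only as an illustrative example).
% A learner that outputs a hypothesis consistent with m i.i.d. training
% examples achieves, with probability at least 1 - \delta, true error at
% most \epsilon provided
\[
  m \;\ge\; \frac{1}{\epsilon}\left( \ln\lvert H \rvert + \ln\frac{1}{\delta} \right).
\]
```

Here m is the number of training examples; the required m grows only logarithmically in the size of the hypothesis class and polynomially in 1/ε and 1/δ.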

An artificial neural network (ANN) learning algorithm, usually called ‘neural network’ (NN), is a learning algorithm that is inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.

Genetic programming (GP) is an evolutionary algorithm-based methodology inspired by Darwinian evolution to find computer programs that perform a user-defined task. It is a specialization of genetic algorithms (GA) where each individual is a computer program. It is a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program’s ability to perform a given computational task.
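To make the neural-network description above concrete (the genetic-programming approach is not illustrated here), the following is a minimal sketch of a feed-forward network trained by gradient descent on the XOR function, written in plain NumPy. The architecture, learning rate, and iteration count are arbitrary choices for illustration.

```python
# A tiny feed-forward neural network learning XOR by gradient descent.
import numpy as np

rng = np.random.RandomState(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 artificial neurons between the 2 inputs and 1 output.
W1, b1 = rng.randn(2, 8), np.zeros(8)
W2, b2 = rng.randn(8, 1), np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass: each neuron computes a weighted sum followed by a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error with respect to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should end up close to the XOR targets 0, 1, 1, 0
```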

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to some predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated for example by internal compactness (similarity between members of the same cluster) and separation between different clusters. Other methods are based on estimated density and graph connectivity. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis.
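A minimal clustering sketch, assuming scikit-learn: k-means partitions the observations into a fixed number of clusters by alternating between assigning each point to its nearest centroid and recomputing the centroids. The data set and the choice of three clusters are illustrative only.

```python
# Unsupervised clustering with k-means: no labels are used during fitting.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)              # labels ignored: unsupervised setting

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)                 # one centroid per cluster
print(kmeans.labels_[:10])                     # cluster assigned to each observation
```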

A Bayesian network is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG, a directed graph with no directed cycles). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning.
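A minimal worked version of the disease/symptom example, reduced to a two-node network (Disease → Symptom). All of the probabilities below are made-up numbers for illustration; with a single edge, inference is just Bayes' theorem.

```python
# Two-node Bayesian network: Disease -> Symptom, with illustrative probabilities.
p_disease = 0.01                     # prior P(disease)
p_symptom_given_disease = 0.90       # P(symptom | disease)
p_symptom_given_healthy = 0.05       # P(symptom | no disease)

# Marginal probability of the symptom (summing over both states of the parent).
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' theorem: posterior probability of the disease given the observed symptom.
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))   # approximately 0.154
```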

Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.
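A minimal sketch of one common reinforcement learning algorithm, tabular Q-learning, on a toy corridor environment. The environment, the reward of 1 for reaching the goal state, and the hyperparameters are assumptions made for illustration, not details from the text; note that no correct input/output pairs are ever supplied, only rewards.

```python
# Tabular Q-learning on a corridor of states 0..4; reward 1 only at state 4.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # learned value of each (state, action) pair
alpha, gamma = 0.5, 0.9               # learning rate and discount factor
rng = np.random.RandomState(0)

for _ in range(500):                  # episodes, each starting from state 0
    s = 0
    while s != n_states - 1:
        a = rng.randint(n_actions)                    # random exploratory behavior
        s_next = max(0, s - 1) if a == 0 else s + 1   # deterministic transitions
        r = 1.0 if s_next == n_states - 1 else 0.0    # reward only at the goal
        # Q-learning update: move Q[s, a] toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy: expected to choose "right" (1) in every non-terminal state.
print(Q.argmax(axis=1))
```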

Several learning algorithms, mostly unsupervised learning algorithms, aim at discovering better representations of the inputs provided during training. Classical examples include principal components analysis and cluster analysis. Representation learning algorithms often attempt to preserve the information in their input but transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This allows the inputs that come from the unknown data-generating distribution to be reconstructed, while the representation need not be faithful to configurations that are implausible under that distribution. Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional (high-dimensional data, meaning data that requires more than two or three dimensions to represent, can be difficult to interpret). Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.
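A minimal representation-learning sketch using principal components analysis, one of the classical examples named above. It assumes scikit-learn; the digits data set and the choice of two components are arbitrary.

```python
# PCA re-expresses 64-dimensional digit images in a few directions of maximal variance.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)            # 8x8 images as 64-dimensional vectors

pca = PCA(n_components=2)
Z = pca.fit_transform(X)                       # low-dimensional representation

print(Z.shape)                                 # (n_samples, 2)
print(pca.explained_variance_ratio_)           # variance captured by each component
```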

In sparse dictionary learning, a datum is represented as a linear combination of basis functions, and the coefficients are assumed to be sparse. In classification, the problem is to determine which class a previously unseen datum belongs to. Suppose a dictionary for each class has already been built. Then a new datum is assigned to the class whose dictionary gives it the best sparse representation. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
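A minimal sparse-coding sketch, assuming scikit-learn: a dictionary is learned from data, and each datum is then represented by a small number of nonzero coefficients over that dictionary. In the classification scheme described above, one such dictionary would be learned per class; only a single dictionary is learned here, and the data set, dictionary size, and sparsity level are illustrative choices.

```python
# Learn a dictionary and sparse codes: each datum uses at most 5 atoms.
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning

X, _ = load_digits(return_X_y=True)

dico = DictionaryLearning(n_components=32, transform_algorithm='omp',
                          transform_n_nonzero_coefs=5, max_iter=20,
                          random_state=0)
codes = dico.fit_transform(X[:200])            # sparse coefficients for each datum

print(dico.components_.shape)                  # the learned dictionary: (32, 64)
print((codes != 0).sum(axis=1)[:10])           # nonzero coefficients per datum
```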
