**Overview**

Humans learn from past experience, while machines follow instructions given by humans. But what if humans could train machines to learn from past data and do what humans do, only much faster? That is the idea behind machine learning.

Ever since computers were invented, people have wondered whether they might be made to learn. If we understood how to program computers to learn, the impact could be dramatic, opening up new uses with new levels of competence and customization. Many algorithms have been invented for specific types of tasks that help a machine learn and perform better. Let’s take an example.

Suppose you are home alone and decide to spend some time on your laptop. You open it and go to YouTube (let’s keep it clean here). After watching a video, you may like or dislike it, depending on its genre, acting, music, characters and so on. You may like comedy videos and dislike educational videos. So now YouTube knows your preferences, and your new recommendations will be based on your likes and dislikes. No person is recommending those videos; all of the recommendations are based on your clicks.

So, what machine learning does is **learn from the data**, build a **prediction model**, and, when a new data point arrives, easily **predict** its output. The more the data, the better the model and the higher the accuracy of the prediction. As our understanding of computers continues to mature, it seems inevitable that machine learning will play an increasingly central role in computer technology.

Speaking in **technical terms**, machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without explicit instructions, relying on patterns and inference instead. It is seen as a **subset of Artificial Intelligence**. Machine learning builds mathematical models based on sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning draws on **probability**, **statistics**, **optimization** and **computer programming**. It is also referred to as predictive analytics.

So, what we need our model to do is perform accurately on new, unseen examples after having been trained on a **training data set**. The new data may not follow an obvious pattern, so the model must generalize: it should produce sufficiently accurate results on cases it has never seen.

In this article we will briefly discuss the **history** of machine learning, the **types of algorithms** used in machine learning, the **models** used in machine learning and some of the applications of machine learning.

**History**

The term Machine Learning was coined in 1959 by Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, who stated that it “gives computers the ability to learn without being explicitly programmed”.

And in 1997, Tom Mitchell gave a “well-posed” mathematical and relational definition: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E”.

So, in order to have a well-defined learning problem, we must identify the task (T), the source of experience (E) and the performance measure (P). Take the example of a robot learning to drive:

- Task T : Driving a vehicle on the road.
- Training Experience E : Observed human driver and sequence of commands.
- Performance Measure P : Distance traveled without error.

So, if a robot is learning to drive a car, its task is to drive the vehicle on a road or highway. It has a set of commands and a human driver observing it. Using the set of commands, it starts driving the vehicle, and for every error the observing driver provides feedback. The robot now has the distance it traveled without error and a new set of instructions from the observer, which it uses to refine its model. With time and experience the model becomes more robust. In this way a robot can learn to drive (easier said than done).

So, every task provides a new performance measure, which serves as experience for further tasks. Building the whole system requires proper implementation of the algorithms and models so that new tasks are performed better. Some of these learning algorithms and models are discussed below.

**Learning Algorithms**

There are many ways in which a machine can learn. Learning algorithms differ in their approach, the type of data they take as input and output, and the type of task or problem they are intended to solve. They can be divided into:

- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning

Let us get a brief introduction to each of these learning methods using examples.

**Supervised Learning**

Suppose you have 3 different types of coins, say 1c, 2c and 5c, each with a different weight: 1c weighs 1 gram, 2c weighs 2 grams and 5c weighs 5 grams. Now you have weight as a feature and the coin type as a label.

You train your model on various input coins. The trained model is then used to classify new coins as 1c, 2c or 5c. This whole process of prediction based on previous data is known as supervised learning.

Supervised learning algorithms build a mathematical model from a set of data that contains both the inputs and the desired outputs. This data is known as training data and consists of a set of training examples, each with one or more inputs and a desired output. The algorithm learns a function from the training data, and that function is then used to predict the outputs for new inputs that were not part of the training data. The accuracy of the function depends on the quality of the input data: the more accurate the input data, the more accurate the function and the better the predictions for new inputs. The function can also be updated over time as new input data flows in and the correct output for each input becomes known.
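The coin example above can be sketched in a few lines of Python. This is a hypothetical, minimal illustration (the data values and the nearest-neighbour rule are chosen for simplicity, not taken from the article): the training data pairs each weight with its label, and prediction returns the label of the closest known weight.

```python
# Hypothetical training data: (weight in grams, coin label) pairs.
training_data = [(1.0, "1c"), (1.1, "1c"), (2.0, "2c"),
                 (2.1, "2c"), (4.9, "5c"), (5.0, "5c")]

def predict(weight):
    """Nearest-neighbour prediction: return the label of the
    training example whose weight is closest to the input."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - weight))
    return nearest[1]

print(predict(1.05))  # a light coin is classified as "1c"
print(predict(4.8))   # a heavy coin is classified as "5c"
```

Real applications would use a library classifier trained on many labelled examples, but the principle is the same: learn a mapping from features to labels, then apply it to unseen inputs.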

**Unsupervised Learning**

Unsupervised learning is a technique in which we do not need to supervise the model. It deals with unlabelled and uncategorised data: the algorithm acts on the data without prior training, grouping it into different categories on its own.

Now, suppose you have the same 3 types of coins, each with a different weight, say 1 gram, 2 grams and 5 grams, but this time the coins carry no labels. The coins can still be grouped into three types based on weight alone; this grouping is unsupervised learning. Here the data has a single feature (weight) and three natural clusters, but many complicated problems have far more dimensions and can be difficult to visualize. So dimensionality reduction is carried out in most of these cases, and then the data is partitioned into clusters.

In short, unsupervised learning is about finding structure hidden in collections of unlabelled data.
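The unlabelled-coin example can be sketched with a tiny one-dimensional k-means clustering. Everything here is a hypothetical illustration (the weights and starting centroids are invented for the sketch): no labels are given, yet the algorithm recovers the three weight groups on its own.

```python
# Hypothetical unlabelled coin weights in grams (three hidden groups).
weights = [1.0, 1.1, 0.9, 2.0, 2.1, 1.9, 5.0, 5.1, 4.9]

def kmeans_1d(data, centroids, iterations=10):
    """Tiny 1-D k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for x in data:
            nearest = min(centroids, key=lambda c: abs(c - x))
            clusters[nearest].append(x)
        centroids = [sum(pts) / len(pts) for pts in clusters.values() if pts]
    return sorted(centroids)

# Arbitrary starting guesses; the centroids settle near 1, 2 and 5 grams.
print([round(c, 2) for c in kmeans_1d(weights, centroids=[0.0, 3.0, 6.0])])
```

No labels were used anywhere: the structure (three clusters of weights) was discovered from the data alone, which is exactly the point of unsupervised learning.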

**Reinforcement Learning**

Reinforcement learning involves learning what to do in order to maximize a numerical reward. It is a closed-loop system, in which the learning system’s actions influence its later inputs. The learner is not told which action to take; it has to discover for itself which actions yield the most reward by trying them out. The important features that distinguish this kind of learning from the others are:

- It is a closed-loop system in an essential way.
- The learner is given no direct instructions on which actions to take.
- Actions are chosen based on previous rewards.

It differs from supervised learning in that supervised training data contains the correct output, so the model is trained with the right answer itself. In reinforcement learning there is no such answer; the learner has to decide for itself what to do to perform the task.
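A minimal sketch of this trial-and-error loop is the classic two-armed bandit with an epsilon-greedy strategy. The arms, reward values and the 10% exploration rate below are all invented for illustration; the point is that the learner is never told which arm is better, yet discovers it from rewards alone.

```python
import random

random.seed(0)                           # make the sketch reproducible
reward_of = {"A": 0.2, "B": 0.8}         # true arm values, hidden from the learner
totals = {"A": 0.0, "B": 0.0}            # total reward collected per arm
counts = {"A": 0, "B": 0}                # times each arm was pulled

def estimate(arm):
    """The learner's current estimate of an arm's value."""
    return totals[arm] / counts[arm] if counts[arm] else 0.0

for step in range(1000):
    if random.random() < 0.1:            # explore 10% of the time
        arm = random.choice(["A", "B"])
    else:                                # otherwise exploit the best estimate
        arm = max(["A", "B"], key=estimate)
    totals[arm] += reward_of[arm]        # receive the (unlabelled) reward
    counts[arm] += 1

# The learner ends up pulling the better arm far more often.
print(counts)
```

Note the closed loop: each action changes the reward statistics, which in turn change the next action, with no correct answer ever supplied.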

**Models**

Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Some of the models are described below.

**Artificial Neural Networks**

An Artificial Neural Network is a computational model built from simple interconnected elements that process information. It is inspired by the way biological nervous systems work, where billions of neurons form a neural network. It is organized into three kinds of layers:

- Input Layer
- Hidden Layers
- Output Layer

All the inputs are fed into the input layer, passed through the hidden layers, and the final processed data is available at the output layer. Each hidden layer is a collection of neurons, each applying an activation function to a weighted sum of its inputs. During training, the weights keep changing in response to the inputs and outputs until the model is optimized.
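A forward pass through such a network can be sketched in a few lines. This is a hypothetical illustration: the layer sizes, weights and biases below are arbitrary values, not trained ones, and a sigmoid is used as the activation function.

```python
import math

def sigmoid(x):
    """A common activation function: squashes any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron computes a weighted sum of the inputs plus a bias,
    then passes the result through the activation function."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> hidden layer of 2 neurons -> 1 output neuron.
inputs = [0.5, -0.2]
hidden = layer(inputs, weights=[[0.4, 0.6], [-0.3, 0.9]], biases=[0.1, 0.0])
output = layer(hidden, weights=[[0.7, -0.5]], biases=[0.2])
print(round(output[0], 3))  # → 0.592
```

Training would consist of adjusting those weights and biases (typically by backpropagation) so that the output matches the desired targets; only the forward computation is shown here.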

**Decision Trees**

**Support Vector Machines (SVM)**

SVM is a supervised learning model that classifies new data according to a hyperplane constructed from the sample data. In other words, given labelled data, the algorithm outputs a hyperplane that categorizes new data.

Suppose you are given two types of data as shown in the figure. Several lines could separate the two classes; SVM chooses the optimal one, the line with the maximum margin between the classes, based on the data.
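The idea of learning a separating line can be sketched in code. A real SVM finds the maximum-margin hyperplane (e.g. via scikit-learn's `SVC`); for brevity this hypothetical sketch uses the simpler perceptron update rule instead, which finds *some* separating line whenever one exists. The points below are invented illustrative data.

```python
# Hypothetical 2-D points with labels -1 and +1 (linearly separable).
points = [((1, 1), -1), ((1, 2), -1), ((2, 1), -1),
          ((4, 4), +1), ((4, 5), +1), ((5, 4), +1)]

w = [0.0, 0.0]   # weights of the line w.x + b = 0
b = 0.0

for _ in range(100):                                   # repeat over the data
    for (x1, x2), label in points:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified point
            w[0] += label * x1                         # nudge the line
            w[1] += label * x2                         # toward the point
            b += label

# Every training point now lies on the correct side of the line.
print(all(label * (w[0] * x1 + w[1] * x2 + b) > 0
          for (x1, x2), label in points))  # → True
```

Unlike this perceptron, an SVM would not settle for any separating line: among all of them it would pick the one farthest from both classes, which tends to generalize better to new data.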