In the real world, we learn every day and keep improving our decision-making by weighing the outcomes of our past decisions. Isn't this human learning? Now, if we automate this process using computers, what would it be? Yes, you are right. It would be Machine Learning.
We use it a dozen times a day without even knowing it. Every time you do a web search on Google, machine learning software figures out how to rank the pages you see. When Facebook's photo application recognizes you in your friends' pictures, that's also machine learning. And whenever you open your inbox, an intelligent spam filter has already saved you from wading through piles of spam.
Two Major Definitions of Machine Learning
Arthur Samuel defined Machine Learning as the field of study that gives computers the ability to learn without being explicitly programmed.
A more recent and formal definition comes from Tom Mitchell: a computer program is said to learn from experience E, with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
Let's take the example of an online game of chess. Here, playing chess is the task T, the experience gained from playing games is E, and the performance measure P is the probability of winning the next game against a new opponent. Because the computer has the patience to play tens of thousands of games all by itself, it can keep improving its chances of extending a winning streak.
Technical Aspects of Machine Learning: A Brief Look
We all know how computers are used to handle and simplify a host of activities. We are now trying to make the computer learn the macro and micro levels of human thought processes that lead to complex, multi-variable decision making. If we could build truly intelligent machines, they could do just about anything that you or I can do. One family of learning algorithms, called neural networks, loosely mimics how the human brain works; together with expert systems and other techniques from artificial intelligence, they allow the computer to self-learn and upgrade its decision-making with every activity it performs, becoming progressively sharper and more accurate. The computer learns to remember and process past experience, based on archived tasks and their resulting performance, and then predicts the most likely outcome in a specific context or situation.
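The basic unit of such a neural network can be sketched in a few lines. Below is a minimal, illustrative artificial neuron; the inputs, weights, and bias are made-up numbers, not taken from any real system:

```python
import math

# A single artificial "neuron": a weighted sum of inputs passed through
# an activation function (here, the sigmoid). Learning would mean
# adjusting the weights and bias based on past errors.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Example: two inputs with made-up weights.
out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(round(out, 3))
```

A full network chains many such neurons in layers, and training tunes all of their weights at once.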
Major Applications of Machine Learning
Primary Characteristics of Machine Learning Algorithms
Although Machine Learning algorithms can be characterized in several ways, the two most important categories are Supervised Learning and Unsupervised Learning.
In Supervised Learning, the idea is to teach the computer how to do something using a set of data that already contains the correct answers. A simple example is predicting housing prices from records of houses sold in a specific area. Suppose you need to estimate the price of a 1250 sq.ft house in a certain locality, based on the available data of previously sold houses. You could draw a graph like this with the data you already have:
In this graph, I have plotted area (in sq.ft) on the x-axis and the corresponding price (in units of Rs. 10k) on the y-axis. The red X markings represent sold houses. The green line shows our query about the likely price of a 1250 sq.ft house. It crosses the pink line at the point marked 1 and the blue curve at the point marked 2, so you can see two possibilities. Point 1 lies on a straight line (a linear fit to the plotted houses' prices and areas), which predicts a price of around Rs. 25,00,000. Point 2, however, lies on the blue curve, fitted using regression with a quadratic model (a continuous-value fit via a second-order polynomial); this follows the data more closely and gives a lower, more accurate prediction of around Rs. 20,00,000.
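A comparison like this can be sketched with NumPy's polyfit. The sold-house records below are made-up numbers for illustration, not the plotted data; the point is only to compare a straight-line (degree-1) fit against a quadratic (degree-2) fit at the 1250 sq.ft query:

```python
import numpy as np

# Hypothetical sold-house records: area in sq.ft, price in units of Rs. 10k.
areas  = np.array([800.0, 1000.0, 1200.0, 1400.0, 1600.0, 1800.0])
prices = np.array([150.0, 190.0, 235.0, 285.0, 340.0, 400.0])

# Fit a straight line (degree 1) and a quadratic curve (degree 2).
linear    = np.polyfit(areas, prices, deg=1)
quadratic = np.polyfit(areas, prices, deg=2)

query = 1250.0  # sq.ft
linear_pred    = np.polyval(linear, query)     # point 1: straight-line estimate
quadratic_pred = np.polyval(quadratic, query)  # point 2: quadratic estimate
print(f"linear fit:    Rs. {linear_pred * 10_000:,.0f}")
print(f"quadratic fit: Rs. {quadratic_pred * 10_000:,.0f}")
```

On this made-up data the quadratic prediction comes out lower than the linear one, mirroring the behaviour described above.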
Such a graph is produced with math tools (Octave, Matlab, Python, etc.) by fitting a quadratic function (a second-order polynomial) to the data-set values, here the area and price of each sold house. The parameters of this function are corrected, or optimized, iteratively using the gradient descent algorithm, which makes a small adjustment at each step to reduce the error between the fitted curve and the plotted data. These tiny tweaks ensure that the final curve follows the data as closely as possible.
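Gradient descent itself can be sketched in a few lines. The example below fits a straight line to made-up housing data by repeatedly nudging the parameters against the gradient of the mean-squared-error cost; this is a simplified illustration, not the exact procedure behind the graph above:

```python
import numpy as np

# Made-up housing data: area (sq.ft) vs price (Rs. 10k).
x = np.array([800.0, 1000.0, 1200.0, 1400.0, 1600.0, 1800.0])
y = np.array([150.0, 190.0, 235.0, 285.0, 340.0, 400.0])

# Feature scaling keeps a single learning rate stable.
xs = (x - x.mean()) / x.std()

theta = np.zeros(2)   # parameters of h(x) = theta0 + theta1 * x
alpha = 0.1           # learning rate: the size of each small correction
for _ in range(1000):
    err = theta[0] + theta[1] * xs - y                # prediction error per house
    grad = np.array([err.mean(), (err * xs).mean()])  # gradient of the MSE cost
    theta -= alpha * grad                             # nudge parameters downhill

print(theta)  # theta[0] approaches the mean price; theta[1] the slope in scaled units
```

Each pass of the loop is one of the small corrections described above; after enough iterations the parameters settle at the values that minimize the overall error.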
Note: even the stretch of curve between just two neighbouring data points can involve hundreds or thousands of such computations before the best-optimized values are found.
In Unsupervised Learning, we let the machine learn by itself from a given set of data that comes without answers and seems cluttered and unorganized. One of the algorithms used in this self-learning process is the clustering algorithm. Google News is one place where clustering is used: when you type in SPORTS, Google looks at hundreds of thousands of news stories on the web and groups, or CLUSTERS, them into cohesive sports stories, sorted by the latest date. In unsupervised learning there is little or no feedback on the prediction results.
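The clustering idea can be sketched with a minimal k-means loop. The two blobs of 2-D points below are synthetic, and k = 2 is assumed; a system like Google News uses far more sophisticated pipelines, but the assign-then-update loop is the core of clustering:

```python
import numpy as np

# Synthetic data: two blobs of 2-D points, made up for illustration.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(20, 2)),  # blob near (0, 0)
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(20, 2)),  # blob near (5, 5)
])

k = 2
# Initialize each centroid from one point of each blob (keeps the demo deterministic).
centroids = points[[0, 20]].copy()
for _ in range(10):  # a few assignment/update rounds are usually enough
    # Assignment step: each point joins its nearest centroid.
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each centroid moves to the mean of its assigned points.
    centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])

print(np.round(centroids, 1))  # one centroid near (0, 0), the other near (5, 5)
```

No labels were given; the algorithm discovers the two groups purely from the structure of the data.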
Clustering is used in several other applications, such as organizing large computer clusters in data centers so that specific machines work together more efficiently.
It is used in social network analysis, where knowledge of which friends you email the most is used to predict your favored friends list. It is used in market segmentation, where companies with huge databases of customer information automatically group their customers into different market segments for more efficient selling and marketing. The list just goes on and on.