Machine learning uses mathematical algorithms to learn from and analyze data in order to make predictions and decisions in the future.
Today, Machine learning algorithms enable computers to communicate with humans, autonomously drive cars, write and publish match reports, predict natural disasters, and find terrorist suspects.
Machine learning has been one of the most commonly heard buzzwords in the recent past. So let’s jump in and examine the origins of Machine learning and some of its recent milestones.
Origins of Machine learning
The concept of Machine learning came into the picture in 1950, when Alan Turing, a pioneering computer scientist, published an article answering the question "Can machines think?"
He proposed that a machine that succeeded in convincing humans it was not, in fact, a machine would have achieved artificial intelligence.
This came to be known as the Turing test. In 1957, Frank Rosenblatt designed the first neural network for computers, now commonly called the perceptron model.
The perceptron algorithm was designed to classify visual inputs, categorizing subjects into one of two groups.
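As an illustration (not part of the original episode), the perceptron's learning rule can be sketched in a few lines of Python; the toy data and learning rate here are invented for the example:

```python
# Minimal perceptron sketch in the spirit of Rosenblatt's 1957 model:
# learn a linear rule that separates two classes of inputs.
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """samples: list of feature tuples; labels: +1 or -1."""
    n = len(samples[0])
    w = [0.0] * n  # one weight per input feature
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with the current weights.
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            # Update weights only on a mistake (the perceptron rule).
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy example: label a point +1 only when both inputs are 1.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```

Because this toy data is linearly separable, the mistake-driven updates are guaranteed to converge to a separating line.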
In 1959, Bernard Widrow and Marcian Hoff created two neural network models: Adaline, which could detect binary patterns, and Madaline, which could eliminate echo on phone lines.
The latter had a real-world application. In 1967, the nearest neighbor algorithm was written, which later allowed computers to use very basic pattern recognition.
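To make the idea concrete (this example is mine, not the episode's), a 1-nearest-neighbor classifier simply labels a new point with the label of its closest stored example; the points and labels below are invented:

```python
import math

# 1-nearest-neighbor sketch: classify a query point by the label
# of its closest training example, using Euclidean distance.
def nearest_neighbor(train, query):
    """train: list of (point, label) pairs; query: feature tuple."""
    best_label, best_dist = None, math.inf
    for point, label in train:
        dist = math.dist(point, query)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_dist, best_label = dist, label
    return best_label

train = [((1, 1), "A"), ((1, 2), "A"), ((8, 8), "B"), ((9, 7), "B")]
print(nearest_neighbor(train, (2, 1)))  # → "A"
print(nearest_neighbor(train, (7, 8)))  # → "B"
```

This is the simplest form of the pattern recognition the episode mentions: no training phase at all, just a lookup of the most similar past example.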
Gerald Dejong introduced the concept of explanation-based learning in 1981, in which a computer analyzes data and creates general rules to discard unimportant information.
During the 1990s, work on Machine learning shifted from a knowledge-driven approach to a more data-driven approach. Scientists began creating programs for computers to analyze large amounts of data and draw conclusions, or learn, from the results.
Now let’s talk about some of the recent achievements in this field. In 2011, using a combination of Machine learning, natural language processing, and information retrieval techniques, IBM’s Watson beat two human champions in a game of Jeopardy!
In 2016, Google’s AlphaGo program became the first computer program to beat a professional human Go player, using a combination of Machine learning and tree search techniques.
Since the start of the 21st century, many businesses have ventured into Machine learning projects:
- Google Brain
- AlexNet
- DeepFace
- DeepMind
- OpenAI
- Amazon Machine Learning platform
These are some of the large-scale projects taken up by top companies. Amazon, Netflix, Google, Salesforce, and IBM are dominating the IT industry with Machine learning.
Machine learning has scaled exponentially in recent decades. As the quantities of data we produce continue to grow, so will our computers’ ability to process and analyze it.
So that is all for this episode of Flashback Friday. Like and share the video if you found it interesting. We will be back with another technology in the next episode. Until then, keep learning and stay tuned to Simplilearn.