Putting ‘learning’ into ‘machine’ (or that was the original plan)

Lucia Rodriguez
6 min read · Jun 26, 2019

The tech world can be challenging, and it’s even harder if you’re outside the field. But nowadays almost every human activity involves some degree of technology, from how your phone works, to your car, to your appointment with your doctor. For most of the last century, machines strictly did whatever they were programmed to do, with no other possibilities: in other words, they couldn’t modify their own programs to face new tasks. But in 1959, an engineer at IBM proposed a never-before-seen idea: machines can learn. In IBM documents, Arthur Samuel’s words were “Machine Learning: Field of study that gives computers the ability to learn without being explicitly programmed.”

At that time, people used to see the future like this:

From the series Closer than you think | Source

So the expectations about these kinds of ideas were high; people expected this vision to work like magic. Even though we now have programs able to defeat world-class chess players, saying it has been easy would be a lie. First, we have to lay out the steps to follow (this is an algorithm), and after that we need to implement those steps. Neither process is especially easy: lots of programmers and engineers have had to build these algorithms and test them. With that in mind, we can understand a second, more precise definition of machine learning, courtesy of Tom Mitchell, a researcher in this field:

Well-posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

What is Machine Learning? A simple definition

Machine learning (ML) is a continuously developing field of computer science in which machines take some data as input and produce some output according to certain parameters, all based on statistical techniques. We can find ML everywhere these days. When you search Google for an image, when Netflix recommends a movie, when you get an email with special offers from your favorite store — behind all of these are decisions made by machine learning algorithms.

Experience is everything

Although it’s beautiful when we learn things the fast and easy way, we need to remember that one of our main sources of knowledge has always been trial and error. ML is no exception: to get a more efficient algorithm, it is necessary to test it over and over and over again and learn from this process. That is, making decisions about the algorithm itself based on the output it produces.

But, what kind of experience are we talking about?

In ML we find three main types of learning. Supervised learning feeds the algorithm with inputs and their desired outputs, so the algorithm can compare the desired output with the actual one in order to adjust its model. This kind of learning is really useful when you need to predict events using historical data. Think about weather prediction or the spam filters in our email.
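To make the idea concrete, here is a minimal sketch of supervised learning in plain Python: a tiny perceptron trained on made-up “spam” examples. The word counts and labels are invented for illustration; the point is just to show the loop of comparing the desired output with the actual one and adjusting the model.

```python
def predict(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

def train(samples, epochs=20, lr=0.1):
    n_features = len(samples[0][0])
    weights, bias = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for features, label in samples:
            # Desired output minus actual output drives the update.
            error = label - predict(weights, bias, features)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy data: counts of the words ["free", "winner", "meeting"]; label 1 = spam.
samples = [([3, 2, 0], 1), ([2, 1, 0], 1), ([0, 0, 2], 0), ([0, 1, 3], 0)]
weights, bias = train(samples)
print(predict(weights, bias, [4, 1, 0]))  # → 1 (flagged as spam)
```

Real spam filters are far more sophisticated, but the principle is the same: the historical data supplies the “right answers” the model learns from.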

Unsupervised learning is a thrilling possibility: “take this data and find anything it has in common.” Sometimes our own vision can be very limiting, and this is why this kind of learning is really cool: it finds patterns you wouldn’t be able to see on your own, and those patterns can turn out to be useful. This is how anomaly detection works. Think about how inventive people can get when scheming frauds, for example: every day some new kind of fraud is in development, and its creators want to be as discreet as possible. Banks can’t allow these kinds of actions, so they need to check transactions carefully, but doing so by hand is really difficult. Fortunately, banks can use ML to find unusual patterns in transactions and take action against fraud.
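A minimal sketch of that idea, assuming fraud shows up as transactions far from the typical amount: no labels are given, the code learns what “normal” looks like from the data itself and flags outliers by their z-score. The transaction amounts and the threshold are made up for illustration.

```python
from statistics import mean, stdev

def find_anomalies(amounts, threshold=2.5):
    # How many standard deviations away from the mean counts as "weird".
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

transactions = [20, 25, 22, 30, 24, 27, 21, 23, 26, 5000]
print(find_anomalies(transactions))  # → [5000]
```

Real anomaly detection uses richer features than a single amount, but the principle holds: nobody told the algorithm which transaction was fraudulent; it stood out on its own.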

Reinforcement learning, on the other hand, looks less like studying and more like training a pet. With a system of rewards and punishments you can train your dog, even your cat! Something similar happens with this kind of learning: we can instruct the machine to make decisions that maximize a given reward.
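Here is a minimal sketch of that reward-driven loop: tabular Q-learning on a hypothetical one-dimensional corridor of 5 cells. The agent starts at cell 0 and earns a reward only upon reaching cell 4; through trial and error it learns that moving right maximizes the reward. All the numbers (corridor size, learning rate, episode count) are made up for illustration.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]        # move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):                   # training episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:  # sometimes explore at random
            action = random.choice(ACTIONS)
        else:                          # otherwise exploit what's learned so far
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Nudge the value estimate toward reward + discounted future value.
        q[(state, action)] += alpha * (
            reward + gamma * max(q[(nxt, a)] for a in ACTIONS) - q[(state, action)]
        )
        state = nxt

# The greedy policy after training: move right from every cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

Nobody told the agent which moves were good; the reward signal alone shaped its behavior, much like treats shape a dog’s.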

But, why all of these?

Simple: ML’s primary objective is building models, efficient models that analyze data properly. To do so, we can draw on several approaches and techniques.

Finding relationships between variables x and y is what ML is all about. For this we have correlation and regression: analysis procedures for understanding the behavior of these variables and how one of them can affect the other.
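The simplest version of this is fitting a straight line y = m·x + b by least squares, which quantifies how a change in x affects y. A sketch with made-up data points:

```python
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Closed-form least-squares slope and intercept.
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly y = 2x, with some noise
m, b = fit_line(xs, ys)
print(round(m, 2), round(b, 2))   # → 1.99 0.09
```

The fitted slope of about 2 recovers the relationship hiding in the noisy data, which is regression in a nutshell.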

On the other hand, we can find some interesting techniques. The k-nearest neighbors algorithm, for example, classifies a given data point based on its neighbors: it assumes that the most common class among the points around an unknown one is probably its class too. The decision tree is another technique for decision-making scenarios and can be used as a predictive model. Deep learning, for its part, is fascinating: it is inspired by how eyes and ears work along with the brain to produce sight and hearing. Deep learning is based on neural networks, a class of algorithms whose behavior resembles that of human neurons. A neural network has input neurons whose output serves as input to other neurons. Deep learning is modeled this way: a cascade of non-linear processing layers, each layer’s output serving as input to the next. These layers can transform, extract, or represent features of the data.
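Of these techniques, k-nearest neighbors is compact enough to sketch in a few lines: a new point gets the most common label among its k closest known points. The points and labels here are made up for illustration.

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_classify(points, labels, query, k=3):
    # Sort known points by distance to the query, keep the k closest,
    # and vote on the label.
    nearest = sorted(zip(points, labels), key=lambda pl: dist(pl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["blue", "blue", "blue", "red", "red", "red"]
print(knn_classify(points, labels, (2, 2)))  # → blue
```

The query point (2, 2) sits among the “blue” cluster, so its three nearest neighbors outvote everything else.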

How it works

ML can be pretty complex. In fact, you can find loads of jokes about finding and handling ML algorithms. But how do you actually work with them? Well, there’s a range of options. Languages like Python, Java, R, and C++ have their own sets of tools for working with data using ML.

Is it true that machines will take humans’ jobs?

Not necessarily. It’s true that machines have been making our jobs easier, and even if this kind of fear is understandable, it’s important to remember that no matter how grandiose ML can seem, right now this technology relies entirely upon us, no matter how independent it may look.

Think about biases. Even with its cold, logical way of making decisions, a machine can’t avoid bias if its algorithms were modeled with bias in the first place. Somewhere I read that, some time ago, an American couple was having fun with Apple’s Siri: while his voice was fully recognized, her Latin accent couldn’t be handled by Siri at the time. This is a good example of bias running in this kind of technology. Today software engineers work hard to overcome this kind of bias, but programming something to overcome it may be the easy part. It’s hard to understand our own biases, and it’s even more difficult to grasp how harsh the output of a biased algorithm can be: it can actually hurt people or their quality of life. Even when almost all of its critical processes are fully automated, human action in ML is pivotal even now. We won’t get replaced by machines that easily. ML can be very powerful, but it needs us just as much.

Sources:

https://www.sap.com/latinamerica/products/leonardo/machine-learning/what-is-machine-learning.html

https://carloszr.com/podcast-informatica-brecha-digital/#0121506_Que_es_Machine_Learning

https://www.youtube.com/watch?v=7ClLKBUvmRk&t=74s
