An introduction to Artificial Intelligence, Machine Learning & Deep Learning.

Millennia before the start of the Common Era, before computers or the internet were even the faintest glimmer of a twinkle in anyone's eye, the Ancient Greeks told a myth about a mechanical man.

Talos was a giant bronze automaton in the likeness of a human, tasked with guarding the island of Crete from invaders. It could do anything a human could do, and more, since its duties included hurling massive boulders at unwanted visitors attempting to land on the island.

As such, it could be said that the idea of artificial intelligence has been a part—perhaps, one could even say, a collective dream—of humanity for thousands of years.

Businesses and careers are already being shaped by artificial intelligence technology, and will continue to be. These technologies, and the countless debates raging around them, offer glimpses into a vastly different future for us and our planet, some optimistic and some dystopian.

So, we thought that it would probably be a good idea to break down the concept, before we all get left in the radioactive dust of the machine-human war—just joking, of course.

Artificial Intelligence: Task masters

What is AI?

Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are hot topics these days. While all fall under the same broad category known as “Artificial Intelligence”, these terms are often used interchangeably, and incorrectly.

Often, when we think of AI, we can’t help but be informed by the decades of science fiction and pop culture that have shaped our imaginations. At the root of all these fictional dystopian visions lies a far more utilitarian and optimistic idea: machines that can smartly perceive, learn, reason, and solve problems based on the world around them.

In other words, AI refers to machines that can accomplish tasks in an “intelligent” manner. 

Intelligence, in this case, doesn’t necessarily mean “thinking”. It means something more like a computer’s ability to perceive data, “choose” a best course of action from a range of possible actions and outcomes, and formulate a plan to accomplish a goal. In other words, what makes AI intelligent is its ability to analyze data and pick the most suitable strategy from a range of other, often equally logical, ones, and to do so thousands of times faster than a human brain, and tirelessly.
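The idea of “choosing a best course of action” can be sketched in a few lines of code: score each candidate action against a goal and pick the highest-scoring one. The actions and scores below are invented purely for illustration.

```python
# A highly simplified sketch of an AI "choosing" an action:
# evaluate every candidate against a scoring function and pick the best.

def choose_action(actions, score):
    """Pick the action whose score is highest."""
    return max(actions, key=score)

# Hypothetical parking manoeuvres, scored by estimated chance of success.
estimated_success = {"reverse_park": 0.92, "parallel_park": 0.75, "abort": 0.10}

best = choose_action(estimated_success, estimated_success.get)
print(best)  # reverse_park
```

Real systems evaluate vastly more candidates against far richer models, but the core loop, perceive, score, select, is the same.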

These intelligent computers are programmed by computer scientists to perform a specific function; but rather than simply repeating a single process, they accomplish their function by taking in data, sensory or digital, and analyzing and making sense of it. This is what is termed ‘Applied AI’, i.e. AI with specific applications.

We already have the beginnings of AI like this in our daily lives, such as virtual assistants like Apple’s Siri, Amazon’s Alexa, and Google Assistant. Modern cars with driver assistance use AI to analyze visual input from cameras, plotting the best course for parking or notifying you when you have crossed a solid white line. We are also seeing AI in the medical field, where computers are being used to study diseases and formulate better, more focused drugs to treat illnesses.

Distinct from applied AI is what is known as ‘General AI’, or artificial general intelligence (AGI): a machine or computer programme that can do more than just one specific task and, in principle, perform any intellectual task that a human being can. Picture, for example, an AI that uses the same cameras it uses for safe driving for multiple other purposes as well, like giving you real-time directions or pointing out a good coffee shop on the corner.

Such a generalized AI system would need a far more sophisticated and efficient means of teaching the AI the generalized knowledge and skill set required to perform various tasks.

Machine Learning: Computers going to school

Machine learning teaches machines how to learn by themselves.

Enter Machine Learning (ML), a state-of-the-art subfield—or deepening of the concept, however you’d like to look at it—of the broad category of AI.

Arthur Samuel coined the phrase “Machine Learning” in 1959 to describe machines that learn on their own without being explicitly programmed.

ML refers to computers that accomplish tasks intelligently, but rather than being programmed by coders to do a specific task, or being given all the definitions and parameters of their tasks beforehand, they actively learn and teach themselves what to do and which results are better than others.

In essence, ML describes a self-programming computer: one that learns automatically, without needing to be explicitly programmed or taught. Rather than being given rules and models up front, the machine is taught how to learn, building its own models of how particular interactions work from examples, experience, and experimentation.

One way ML can be explained is by imagining that instead of computers being told or shown exactly how to do something, they could be shown how not to do something, and from there ML algorithms can deduce how to actually accomplish the task in question.

By essentially fitting a line of best fit between a mass of data points and their outcomes, ML computers teach themselves what works, what doesn’t, and what works better.
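That “line of best fit” idea can be made concrete with ordinary least squares, one of the simplest learning procedures there is. A minimal sketch, with made-up data points chosen so the fitted line is easy to check:

```python
# Fit a straight line y = m*x + b to data using ordinary least squares.

def fit_line(xs, ys):
    """Return slope m and intercept b of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Invented inputs and outcomes that happen to lie on y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

m, b = fit_line(xs, ys)
print(m, b)  # 2.0 1.0
```

Nothing here is told *how* the data relates; the relationship (slope 2, intercept 1) is recovered purely from the examples, which is the essence of learning from data.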

There are three types of Machine Learning:

  1. Supervised Learning gives a machine both a problem and the answer to that problem. After training on enough such labelled examples, the machine can produce an accurate output solution for a new input problem.
  2. Unsupervised Learning gives a computer a collection of unlabelled data and asks it to find similarities within the data, which it then uses to compile a working model of the system for later use.
  3. Reinforcement Learning describes a method in which a computer acts on a set of data and its output is fed back into the system, so it can test whether its previous conclusions hold for new data sets. The basic idea revolves around a reward-and-punishment system centred on whether an answer is correct or not; the machine adjusts its “behaviour” to maximize rewards in the short or long term.
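The first of these, supervised learning, can be sketched with one of the simplest possible learners: a nearest-neighbour classifier. The machine is given problems (feature pairs) together with their answers (labels), and answers a new problem by finding the most similar labelled example. The data below is invented for illustration.

```python
# A toy supervised learner: 1-nearest-neighbour classification.
# Training data pairs each problem (a feature vector) with its answer (a label).

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, point):
    """Answer a new problem with the label of the closest known example."""
    closest = min(training_data, key=lambda example: distance(example[0], point))
    return closest[1]

# Labelled examples: (features, answer).
training_data = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.0, 8.5), "large"),
]

print(predict(training_data, (1.1, 0.9)))  # small
print(predict(training_data, (8.5, 9.0)))  # large
```

Unsupervised learning would drop the labels and group the points by similarity instead, and reinforcement learning would replace the labels with rewards earned by trial and error.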

Following on from ML, computer scientists theorized that, rather than coding AI to learn how to learn, it would be far more efficient to code it to think, learn, and organize information more like human beings do, and to give it access to all the information in the world: the Internet.

Deep Learning: Thinking like a human

Deep Learning allows AI to learn like humans do by mimicking how the human brain functions.

Deep Learning is a subfield of Machine Learning (which is, in turn, a subfield of AI) that uses multiple algorithms and nodes designed to closely mimic how the human brain works: an interconnected network of neurons.

The human brain—which is a powerful learning computer in its own right with organic technology that has been evolving for hundreds of millions of years—is fantastic at multitasking.

Just think of the complex physical and cognitive processes that all seem to run simultaneously and unnoticed. Consider all the sensory functions and tasks the brain has to filter and manage just to read this article: it is managing your essential bodily processes while also perceiving the letters, recognizing them as letters, ordering them into words, and interpreting those words individually and in context, before finally making sense of them, all while you likely have other thoughts in your mind at the same time.

Deep Learning, in essence, works by using various layers and nodes of computers and processes that artificially mimic how our brains function, thus increasing a computer’s ability to learn and recall information.

Having various nodes and layers means that different information is processed in multiple layers at the same time; data is then fed forward in one direction, passing on to the next layer only if its ‘weight’ surpasses a certain threshold.
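A single such node can be sketched in a few lines: it combines its inputs using weights and only “fires” (passes data onward) when the weighted sum crosses its threshold. The weights and thresholds below are invented for illustration; real networks learn them from data.

```python
# One "node" of a feed-forward network, with a simple step activation.

def neuron(inputs, weights, threshold):
    """Output 1 (fire) if the weighted sum of inputs passes the threshold, else 0."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# A tiny two-layer network: two hidden neurons feeding one output neuron.
inputs = [0.9, 0.2, 0.7]
hidden = [
    neuron(inputs, [0.5, 0.1, 0.4], threshold=0.5),  # fires (sum 0.75)
    neuron(inputs, [0.1, 0.9, 0.1], threshold=0.5),  # stays silent (sum 0.34)
]
output = neuron(hidden, [0.6, 0.6], threshold=0.5)
print(hidden, output)  # [1, 0] 1
```

Deep networks stack many such layers, and replace the hard threshold with smoother activation functions so the weights can be adjusted gradually during learning.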

The invention of the internet means that there is now an explosion of data and storage, data on a scale that was simply not available before, ready to be used by AI to learn and grow. The advantage of computers with access to all this information is that they are far faster and more consistent than humans, though they can also inherit biases ingrained in the data they learn from.

Ultimately, the goal of AI is not necessarily to build computers that think better than humans, but ones that think more like humans, and more accurately.

We don’t aim to make AI that imagines or feels, but rather to mimic the human brain in a way that can make sense of multiple different sensory stimuli simultaneously and accomplish multiple tasks at once intelligently.

Everything from education and medicine to business and the sciences can benefit from AI, and some fields already do; but, at the same time, there is certainly the possibility of AI being used for evil and injustice. For that reason, Google and many scientists have made their warnings about AI clear: humanity has a responsibility to use AI for the benefit of humanity, not as a tool for invasive surveillance, the spread of misinformation, or weapons that could end lives.

Artificial superintelligence is still a long way off from creating a singularity, i.e. the unknown event horizon at which an AI creates its own AI system, an event that is impossible to predict or envision because we simply don’t know what it will look like or how it will work. Even so, AI has the potential to greatly impact and improve everything from every industry to humanity’s day-to-day life.

This has been a dream ever since the Ancient Greeks and the myth of Talos. And if the future of AI means the creation of giant robots, then sign me up!

You can find out more about Digital Cabinet at
