Just Like Child’s Play

The Basics Of Artificial Intelligence (AI)

Mec Rawlings on Unsplash.com

Perhaps Artificial Intelligence will go down in history as one of the most misunderstood technologies of our time. Hearing Elon Musk and Mark Zuckerberg talk about it reminds me of the great rivalry between Nikola Tesla and Thomas Edison. At the beginning of any profound technological change, the ideas of legends are like an imperfect vision of the future. That is what AI is today.

On one hand, there are people predicting machines taking over humanity. On the other, there are people who do not know AI exists because they literally live in a dark world, a world without electricity.

However, what is true is that humans are walking data factories, generating exabytes of input for machines through sensors and microchips. This data is already being used by tech giants such as Google, Facebook and Apple to improve AI algorithms that search for places, recognize faces and power virtual assistants such as Siri. Thus, AI is advancing through machine learning.

What nobody knows for certain is if and when machines will reach superhuman intelligence.

While there is no pragmatic reason to think of evil machines as a near-term probability, it is one possible outcome. In the end, what is certain is that we have to keep abreast of advances in the field of AI. That is the goal of this article: to lay out the basics of AI, explain the state of the art today and provide helpful resources for beginners to engage with AI.

Does Curiosity Always Kill The Cat?

Humans are curious animals. Our knowledge has been codified as science. Much of that knowledge, and its later application as technology, is an attempt to recreate the world around us. When we gazed upon the pyramids, we had a desire to recreate the wonders created by our ancestors. That quest has never really taken a pause. From CRISPR gene editing technology to industrial robots and autonomous vehicles, man has tried to reinvent the world in his image, almost like Prometheus stealing fire from the gods. Only this time, we are stealing nature’s best kept secrets to become more powerful.

AI is an important milestone in that never ending quest. With any exponential technology capable of changing our collective lives, we have a responsibility to make it work in our interests. Just like teaching a child to become a good human when he grows up.

Early Origins

Pamela McCorduck (born 1940), an American author who has written several books on thinking machines, once wrote that AI began with “an ancient wish to forge the gods.” These words offer a philosophical explanation of why humans began researching AI in the first place. AI is man’s attempt to recreate the human brain. A brief timeline of the attempts to create artificial intelligence shows growth in spurts, with the latest spurt characterized by Machine Learning (ML) algorithms.

Source: digitalintelligencetoday.com

Just like many other emerging technologies, AI has had its “AI Winter”: a period of reduced interest in the technology, born of growing disappointment with the systems of the day and followed by reduced funding for research. There have been conjectures about a second AI winter in the future.

Many factors contributed to the slowdown in the development of AI, including shifting institutional objectives away from research, hype and exaggeration about what AI would deliver, and a lack of adequate computing resources.

Hubert Dreyfus, drawing heavily on European philosophers like Heidegger and Merleau-Ponty, fundamentally disagreed with AI proponents that AI would ever achieve superhuman intelligence, much like the Musk-Zuckerberg debate today. Proponents of AI summed up their stance using Alan Turing’s words:

“we cannot so easily convince ourselves of the absence of complete laws of behaviour … The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, ‘We have searched enough. There are no such laws.’”

Similar to the conditions that preceded the first AI winter, the hype surrounding Machine Learning in particular and AI in general could lead to inflated expectations and exaggeration of what AI is capable of achieving. However, the economic rationale underpinning the need for more intelligent AI keeps expanding: ageing populations in advanced countries, productivity growth that depends almost exclusively on advancing the technology frontier, and slowing global growth that puts pressure on revenue growth and profitability.

In other words, AI is here to stay.

Neural Nets

In some ways, Neural Nets are the computer version of the neurons in the human brain, and their history can be read as the history of AI itself. The first neural nets, called “the Electronic Brain”, were developed in 1943. Today, pioneers such as Geoffrey Hinton of the Vector Institute are finding new ways to mimic the human brain. In the end, there could be a hybrid of human tissue married with computer chips. For now, the field of Machine Learning is using increasingly deep neural networks for tasks such as recognizing faces and enhancing the capabilities of voice-based assistants.

Machine Learning: Teaching Baseball To A Child

If one were to summarize various ML techniques in play today, it would look something like the flowchart below:

Source: Pinterest

Let’s take the example of teaching a child named Sam to play baseball. If you were teaching him to hit a home run, your output would be hitting the ball out of the park and your inputs would be a bat and a ball. The algorithm would be the technique Sam should use to hit a home run.

On the first go, Sam hits the ball and it lands just short of the boundary. You observe his technique, i.e. stance, grip, force and hand-eye coordination, and assign a weight to each factor, say 0.25 each. If you feel that hitting the ball harder, i.e. assigning more weight to force, will help achieve the objective, you instruct him to hit the ball harder. In ML speak, that means putting more weight on force (say 0.35) and less on some other factor, say stance (0.15). If Sam then hits a home run, you have achieved your objective.
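To make the weights concrete, here is a minimal Python sketch of this example. Every number below is made up for illustration, and the weighted sum is only a stand-in for how far the ball travels; this is not a real training algorithm, just the arithmetic behind the story.

```python
# Toy model of Sam's swing: each factor contributes to the outcome
# in proportion to its weight. All values are illustrative.
factors = {"stance": 0.5, "grip": 0.7, "force": 0.9, "hand_eye": 0.7}

# First attempt: every factor weighted equally.
weights = {"stance": 0.25, "grip": 0.25, "force": 0.25, "hand_eye": 0.25}

def predicted_distance(factors, weights):
    """Weighted sum of swing factors: our stand-in for hitting distance."""
    return sum(factors[name] * weights[name] for name in factors)

print(predicted_distance(factors, weights))  # 0.70: just short of the boundary

# The coach's adjustment: more weight on force, less on stance.
weights["force"], weights["stance"] = 0.35, 0.15
print(predicted_distance(factors, weights))  # 0.74: the ball travels further
```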

In a nutshell, you look at the output and, if it is not the desired outcome, you go backwards and adjust the weights until the algorithm uses the inputs to produce the desired output. This technique is called “back propagation”. This way of teaching computers to do routine tasks, such as recognizing a dog as a dog, is known as “Supervised Learning”. Supervised Learning is used for facial recognition, voice recognition and similar tasks.

Back propagation, by datathings.com
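If you want to see back propagation in miniature, here is a sketch in Python with a single weight, adjusted step by step by following the error backwards. Real back propagation applies this across many layers at once; the data here is invented so that the “right” weight is obviously 2.0.

```python
import numpy as np

# Toy data: the desired output is always twice the input,
# so the ideal weight is 2.0.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0              # start with a deliberately bad weight
learning_rate = 0.05

for step in range(100):
    y_hat = w * x                   # forward pass: compute the output
    error = y_hat - y               # how far off the output is
    grad = (2 * error * x).mean()   # gradient of mean squared error w.r.t. w
    w -= learning_rate * grad       # adjust the weight against the gradient

print(round(w, 3))  # ~2.0: the weight the data implies
```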

When ML is directed at inferring patterns from amounts of data too massive for humans to crunch, it uses a technique called “Unsupervised Learning”, which basically involves throwing tons of data at an ML algorithm so that the algorithm can find patterns on its own, including patterns of fraudulent behavior. Here, the data set is the computer equivalent of human experience. In our example above, if Sam practiced more, he would get better at hitting home runs. Likewise, if more data is presented to an ML algorithm, it gets better at pattern recognition. Unsupervised Learning is increasingly used in cyber security to detect abnormal behavior such as reconnaissance and lateral movement.
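Here is a minimal sketch of the idea using scikit-learn’s KMeans clustering: no labels are given, yet the algorithm separates the handful of far-away points from the bulk. The “transactions” below are synthetic numbers invented for illustration; a real fraud or security system would use far richer features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "transactions" (amount, hour of day): most cluster around
# normal values, a few sit far away, standing in for suspicious behavior.
normal = rng.normal(loc=[50, 12], scale=[10, 2], size=(200, 2))
odd = rng.normal(loc=[500, 3], scale=[20, 1], size=(5, 2))
data = np.vstack([normal, odd])

# No labels anywhere: KMeans groups the rows purely by similarity.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# The tiny cluster is the abnormal pattern the algorithm surfaced on its own.
labels, counts = np.unique(model.labels_, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))  # e.g. {0: 200, 1: 5}
```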

The next ML technique is “Reinforcement Learning”. Think of Reinforcement Learning as an approach to an optimization problem. Let’s say that each time Sam hits the baseball closer to the boundary of the field, you reward him with ice cream, so that over many attempts he learns which swings lead to a home run. This is an oversimplified illustration; harder games that can be taught this way include chess, checkers and the ancient Chinese game of Go. In technical terms, reinforcement learning looks like the graphic below.

Source: MIT.edu
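In code, the reward loop can be sketched with tabular Q-learning, one classic reinforcement learning algorithm. The one-dimensional “field” below is a toy of my own invention, not a real baseball simulator: the agent only receives its reward (the ice cream) on reaching the home-run state, and gradually learns which action to prefer in each state.

```python
import random

# Toy problem: states 0..4 are swing strengths; state 4 is the home run.
# Actions: 0 = swing softer, 1 = swing harder. The reward (ice cream)
# arrives only when the agent reaches the home-run state.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = q[state].index(max(q[state]))
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best next action.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print([round(max(row), 2) for row in q])  # values rise as states near the goal
```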

Capsule Neural Networks

A recent advancement in deep neural networks is the Convolutional Neural Network (CNN, or ConvNet), a class of deep, feed-forward artificial neural networks that has been applied successfully to analyzing visual imagery. In our example above, Sam recognizes a ball as a ball, irrespective of the angle he views it from, by using certain key indicators such as its color, seam and contours. In a similar manner, CNNs use simple features such as eyes, ears and nose to perceive that they are looking at a face, irrespective of the angle.
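Here is a minimal PyTorch sketch of that convolution-then-classify pattern. The layer sizes and the 28x28 input are arbitrary choices for illustration, and the model is untrained; it only shows the shape of a CNN.

```python
import torch
import torch.nn as nn

# A tiny ConvNet: convolution layers pick up local features (edges,
# contours), pooling shrinks the image, and a linear layer classifies.
class TinyConvNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1x28x28 -> 8x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 8x14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # -> 16x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One fake grayscale image: the forward pass yields one score per class.
scores = TinyConvNet()(torch.randn(1, 1, 28, 28))
print(scores.shape)  # torch.Size([1, 10])
```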

Geoffrey Hinton, the pioneer behind back propagation, has released two new papers explaining his latest capsule theory of machine learning. Capsules, groups of nodes representing human neurons, are able to quickly understand and recognize an object such as a ball the way Sam (a human) does. A capsule network looks at the seam, contours and colors of a ball to identify it as a ball, instead of learning from thousands of example pictures as current ML algorithms do.

Universal Basic Income

The advances in AI are driving a debate around unemployment and Universal Basic Income (UBI). Sam Altman (president of Y Combinator), in a blog post, revealed yet another idea. He says: “I think that every adult US citizen should get an annual share of the US GDP”. He adds: “American Equity would also cushion the transition from the jobs of today to the jobs of tomorrow.” As interesting as the technology behind AI is the debate surrounding UBI. No surprise, then, that my recommendation is to stay tuned to the economic effects of a disruptive technology.

Resources

Just Like Children Learning To Code

Today, Google celebrated 50 years of children learning to code and dedicated the Google Doodle to that celebration. Google’s website reports: “In the 1960’s, long before personal computers, Seymour Papert and researchers at MIT developed Logo — the first coding language designed for kids”.

Building on Papert’s ideas, kids can use the Scratch programming language to learn to code.

Programming Language To Begin Learning: Python

There are many programming languages you can use to learn to code. However, the one I highly recommend is Python. Unlike compiled languages, Python is an interpreted language, i.e. the interpreter executes each line of code as it is entered, as opposed to a compiler, which translates the complete program before it can run. There are a lot of tutorials to help you get started with Python.
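For instance, you can paste these lines one at a time into the interactive interpreter (started by typing python in a terminal) and watch each statement execute the moment it is entered; the names and numbers are just illustrations.

```python
# Each line runs as soon as the interpreter reads it: no compile step.
name = "Sam"
swings = [0.62, 0.71, 0.74]  # distances from the baseball example

print(f"{name} took {len(swings)} swings")   # Sam took 3 swings
print(f"best so far: {max(swings)}")         # best so far: 0.74
```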

Waikato Environment for Knowledge Analysis (WEKA)

A great Machine Learning (ML) tool to use, irrespective of your programming skills, is WEKA. As per Wikipedia:

“Waikato Environment for Knowledge Analysis (Weka) is a suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. It is free software licensed under the GNU General Public License. Weka (pronounced to rhyme with Mecca) contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to these functions.”

For more advanced applications, some of the platforms that are widely leveraged are Google’s TensorFlow and IBM Watson.
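To give a taste of what such a platform looks like, here is a minimal TensorFlow/Keras sketch that wires the weighted-inputs idea from the baseball example into a trainable model. The layer sizes are my own arbitrary choices and the model is untrained; the point is only how few lines the platform requires.

```python
import tensorflow as tf

# A small Keras model: four input factors flow through two dense layers,
# each of which is a set of learnable weights like those in our example.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                  # four swing factors
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),                    # predicted distance
])
model.compile(optimizer="adam", loss="mse")      # ready to learn from examples
model.summary()                                  # prints the layer/weight counts
```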

People to follow

Some of the experts I highly recommend following to keep abreast of developments in AI include Geoffrey Hinton, Andrew Ng and Demis Hassabis. In addition, here is a list of other experts to follow on Twitter.

Parting Thoughts

I am no expert in Artificial Intelligence. In fact, I am a complete newbie. However, my current research and efforts to learn more about Machine Learning have expanded my horizons to see some of the changes coming down the pike. I recommend that all my readers start learning on their own. The era of “unsupervised (i.e. outside the classroom) home schooling” has arrived. It might be harder to learn on your own, simply because it takes a lot of discipline to keep showing up at the laptop.

However, to prevent atrophy, exercise is indispensable. Therefore, in an age where AI is poised to completely disrupt the labor market, exercising your mental muscles is precisely what you should do, or you risk an atrophy called “economic obsolescence”.

Writer @ The Intersection of Finance, Tech & Humanity. Stories of a Global Language: “Money”. Contributor @ Startup Grind, HackerNoon, HBR. Twitter@akothari_mba
