Can machines think?
In 1950, the English mathematician Alan Turing published a paper titled “Computing Machinery and Intelligence” and famously asked, “Can machines think?” – a question that sparked the idea that computers could be programmed to think.
Even though the earliest references to the idea that reasoning could be artificially implemented in a machine date back to the 14th century, Turing is considered the forefather of AI.
The term artificial intelligence was coined in 1956 by John McCarthy, who is considered the true father of the field. It has been onwards and upwards ever since.
By 2025, the global AI market is expected to reach nearly US$60 billion. And by 2023, there are expected to be about 8 billion digital assistants in use – more than the world’s entire human population.
In the first half of the 20th century, how artificial intelligence works – or would work – was still a mystery that would take decades of technological advancement to unravel.
Then came sci-fi blockbusters like 2001: A Space Odyssey, The Terminator, and I, Robot, which went straight for the darker side of AI. Separating fact from fiction, we are still far from creating humanoids that can think and act better than we do.
Knowing how artificial intelligence works will give you more insight into why this technology may take years to exceed our expectations. Here’s everything you need to know about the whats, whys and hows of artificial intelligence in the 21st century.
What Is Artificial Intelligence?
Artificial intelligence (AI) is the branch of computer science concerned with building machines capable of performing tasks that normally require human intelligence.
AI can also be described as anything that mimics human intelligence. The field draws on computer science, programming, mathematics, statistics, psychology, neuroscience, linguistics, cybernetics, economics and more.
When a machine can learn from data, understand it, and use the acquired knowledge to do something, it exhibits artificial intelligence. AI can also be defined as a set of algorithms that can produce results without being explicitly directed to do so.
In short, AI is an intelligent entity created by humans, with the ability to think and act both humanly and rationally.
Here are some artificial intelligence examples you come across every day.
- Your email account’s spam filter
- Siri, Google Assistant, and Alexa
- The fastest route suggested by Google Maps
- Face recognition on your smartphone
- The auto-correction and prediction on your smartphone keyboard
There are several tests and approaches for gauging the human-likeness of an AI, including:
- Turing Test – An AI entity must be able to converse with a human without the human being able to tell that they’re talking to an AI.
- The Rational Agent Approach – When there is no logically “right” action to perform, a rational agent acts to achieve the best possible outcome.
- The Cognitive Modelling Approach – This approach tries to build an AI based on human cognition.
- The Laws of Thought Approach – A vast list of logical statements governing the workings of our mind, which can be codified and applied in AI algorithms.
As these tests measure the human-likeness of an AI, some scientists have serious concerns about them. A sufficiently advanced AI might one day realise it’s being put to a test and, in order not to expose its level of intelligence, choose to play dumb.
Components of Artificial Intelligence
To fully understand how artificial intelligence works and how we can harness its power, you need to be aware of the technologies and components that make AI viable. They are known as subdomains of AI and help in reverse engineering human intelligence in a machine. Let’s take a look at these components.
1. Machine Learning
Machine learning (ML) gives computers the ability to learn and improve from experience without explicit programming. ML uses statistical methods to make data-driven decisions and perform specific tasks, and ML algorithms are designed to keep improving as they learn from and adapt to new data they’re exposed to.
To achieve this, developers build algorithms capable of identifying patterns, analysing data and making predictions – and the prediction step happens without any input from humans.
To put that in perspective, think of the straight-line equation you learned in high school – simple linear regression (y = mx + c) – one of the most straightforward statistical models used in machine learning.
Using simple linear regression, you can predict your grumpiness based on the number of hours you sleep.
Image Credit: learningstatisticswithr.com
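To see what that looks like in practice, here’s a minimal sketch of fitting y = mx + c with Python – the sleep and grumpiness numbers are invented purely for illustration:

```python
# Fit a straight line (y = mx + c) to made-up data: hours slept vs. a
# hypothetical "grumpiness" score. numpy.polyfit performs the least-squares fit.
import numpy as np

hours_slept = np.array([5, 6, 7, 8, 9])
grumpiness = np.array([85, 70, 55, 40, 30])  # invented scores

m, c = np.polyfit(hours_slept, grumpiness, deg=1)  # slope m, intercept c
print(f"predicted grumpiness after 6.5 hours of sleep: {m * 6.5 + c:.0f}")
```

Once the line is fitted, predicting a new value is just plugging a number into the equation – which is exactly the “prediction without human input” described above.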
The applications of big data in e-commerce, Netflix predicting which movies you might like, and Google Maps suggesting the best route are all made possible by ML.
Machine learning also assists the healthcare industry. Here’s how.
- ML is extensively used by pharma companies to estimate the success rates of drugs by analysing data on their compounds and the associated biological factors.
- By collecting data from multiple sources (electronic health records (EHRs), social media, and more), AI and ML algorithms can predict epidemic outbreaks.
- By analysing a patient’s medical history, ML algorithms can help doctors provide personalised treatment for better results.
2. Deep Learning
Deep learning (DL) is a subset of ML that utilises artificial neural networks to teach machines to process data. Various layers of artificial neural networks are used to classify, deduce and determine a single output from numerous inputs.
In one common training paradigm, known as reinforcement learning, learning takes place by adjusting actions based on continuous feedback: for every right action the system is rewarded, and for a wrong one, penalised. Actions are adjusted to maximise the reward.
To put that in perspective, think of “tricks for treats” training of dogs. Every time your doggy does the desired action, it gets a treat. When it does something else, you don’t give the treat. This way, it learns to maximise the treat by doing the desired action.
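Here’s a toy sketch of that treat-maximising loop in Python – the actions, rewards and update rule are invented for illustration, but the pattern (try, get feedback, prefer what was rewarded) is the essence of reinforcement learning:

```python
# A "tricks for treats" learner: it tries actions, receives a treat (reward 1)
# only for "sit", and gradually comes to prefer the rewarded action.
import random

actions = ["sit", "bark"]
value = {a: 0.0 for a in actions}   # estimated treat value of each action
counts = {a: 0 for a in actions}

for step in range(200):
    # mostly pick the best-known action, occasionally explore at random
    action = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    reward = 1.0 if action == "sit" else 0.0  # treat only for the desired trick
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running average

print(value)  # "sit" ends up with the higher estimated value
```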
Artificial Neural Networks
As previously mentioned, artificial neural networks make deep learning possible. They are computer systems built to mimic the biological neural networks of our brains. The artificial counterparts of neurons (our brain’s working unit) are perceptrons.
Vast numbers of perceptrons are stacked in layers to form artificial neural networks, which analyse large volumes of data to find associations and classify previously unseen data.
For example, suppose you feed 10,000 images of dogs as training examples to a machine learning with neural networks. It will process the images and become capable of answering whether or not a given picture is of a dog.
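A minimal sketch of such a classifier, assuming the Keras API and images already resized to 64×64 RGB arrays (the training data here is hypothetical), might look like this:

```python
# A tiny convolutional network for a binary "dog vs not-dog" decision.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the image is a dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # hypothetical image/label arrays
```

Each layer of the network is, conceptually, one of those stacks of perceptrons described above.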
3. Natural Language Processing (NLP)
Natural language processing (NLP) enables computers to read, understand, interpret, and produce human language. NLP aims to make communication between humans and machines conversational.
As the meaning of a word differs based on context, NLP enables machines to understand such differences and produce the most logical responses. Most AI voice assistants use NLP.
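Here’s a toy illustration of that idea, using scikit-learn’s bag-of-words tools on a handful of invented commands – real assistants use far richer models, but the text-to-meaning mapping is the same in spirit:

```python
# Classify short commands by intent from their words alone.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["play some music", "play my favourite playlist",
         "what's the weather like", "will it rain today",
         "set an alarm", "wake me up at seven"]
intents = ["music", "music", "weather", "weather", "alarm", "alarm"]

clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, intents)
print(clf.predict(["is it going to rain"]))  # -> ['weather']
```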
4. Computer Vision
Computer vision uses pattern recognition and deep learning to identify and interpret the content of an image. Visual data – including tables, graphs, images within documents, and the content of videos – can be processed and analysed.
Computer vision enables the healthcare industry to quickly diagnose patients and evaluate their X-ray scans, which is otherwise a tedious task for humans.
Similarly, computer vision in manufacturing automates visual inspections on assembly lines, replacing tedious and error-prone manual checks with swift and accurate machine vision.
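For a feel of how little code a classic computer-vision task takes, here’s a sketch using OpenCV’s bundled Haar-cascade face detector – “photo.jpg” is a placeholder path:

```python
# Detect faces in an image with OpenCV's pre-trained Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")  # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on greyscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} face(s)")
```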
Of course, these technologies alone don’t make artificial intelligence possible. Rapid advancements in computational speed, graphics processing units (GPUs), internet speeds, and storage capacities have contributed significantly to the success of AI.
Moore’s Law observes that the number of transistors that can fit on a microchip doubles roughly every two years, while the cost of computers is halved. This means that, two years from now, you can expect to own a computer twice as fast as today’s model, for the same price.
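As a quick sanity check on what that compounding means:

```python
# One doubling roughly every two years compounds fast.
years = 10
doublings = years / 2
print(f"transistor count after {years} years: ~{2 ** doublings:.0f}x today's")  # ~32x
```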
Many experts suggest that we haven’t become much smarter about how we code artificial intelligence into existence compared with what was done 30 years ago. What drove the exponential growth in AI is that computing hardware finally caught up, riding Moore’s Law.
How Does Artificial Intelligence Work?
How artificial intelligence works can be loosely compared to how our brains work – after all, it’s trying to mimic what we gained through years of evolution. Machines are trained to analyse and understand information and to adapt based on what they learn. The components discussed above work together to perfect this mimicking of the human brain.
How AI actually works is a question that can be explained in different ways in different contexts. For the sake of simplicity, let’s start right from the very basics.
Firstly, you need to understand that not all tasks performed by an AI are as complicated as you might think. For instance, if you want to build an AI robot that brings you a glass of water whenever you’re thirsty, something as simple as if thirsty → then bring water would do the trick, as the sketch below shows.
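Literally, in Python – bring_water() stands in for whatever the hypothetical robot would actually do:

```python
# The "if thirsty, bring water" robot: the entire intelligence is one rule.
def bring_water() -> None:
    print("Fetching a glass of water...")  # placeholder for a robot action

def on_sensor_update(thirsty: bool) -> None:
    if thirsty:
        bring_water()

on_sensor_update(thirsty=True)
```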
But things can also be as complex as Amazon’s demand-forecasting models, which predict product demand by taking varied factors into account. The point here is that AI doesn’t have to be complicated and overwhelming all the time.
If you’re wondering what makes an AI useful and robust, it’s its ability to learn and make decisions without any human intervention.
To put that in perspective, consider your laptop. It does what you tell it to do – nothing more, nothing less. Suppose you’ve got a vast collection of movies stored locally on it, and every Friday you watch one of them to shift from work mode to life mode.
But various things may affect your decision about which movie to watch. Some Fridays you may prefer rom-coms; other weeks it’s action, drama, thriller and so on.
Your decision is affected by your mood, and your mood is determined by multiple factors such as weather conditions, temperature, date, commitments for the following day, occupants at home, and the time left for bedtime.
With your traditional laptop, you’ll have to decide for yourself. You have to follow your “gut instinct” and choose a movie best suited to your mood. You’ll also have to manually navigate through the movie folders, find the movie file, and open the video player.
If you instead have artificial intelligence in place, all you need to do is grab a bag of chips and wait for the AI to play a movie. And the film will be the perfect one for your mood by taking into account all the factors that affect your decision.
And if the movie suggested is something you don’t want to see, AI will learn from it and apply the learnings to future decisions. The more you utilise and interact with the AI, the better it gets.
Making decisions and predictions is something AI can do with the assistance of machine learning – whose subsets, such as deep learning and artificial neural networks, learn from mistakes and apply the lessons to future decisions.
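To make the movie example concrete, here’s a toy sketch using a decision tree – the features and past Friday choices are invented, but they mirror the factors listed above:

```python
# Predict tonight's genre from mood-shaping factors, using past choices.
from sklearn.tree import DecisionTreeClassifier

# columns: [is_rainy, temperature_c, hours_to_bedtime, guests_at_home]
X = [[1, 12, 5, 0], [0, 25, 2, 1], [0, 18, 6, 0], [1, 8, 3, 1]]
y = ["drama", "comedy", "action", "thriller"]  # what you picked on past Fridays

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1, 10, 4, 0]]))  # tonight: rainy, cold, early, home alone
```

Every Friday’s actual choice becomes a new training row – which is exactly the “the more you interact, the better it gets” loop described above.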
There are multiple types of artificial intelligence, and not all of them are the same. Some, like your smartphone’s voice assistant, may stutter if you ask something abstract, like the meaning of life, unless the answer is spelled out somewhere on the internet.
Then there are advanced versions of AI that can outsmart and outperform us and may even nuke the entire planet into dust. To get more insights into artificial intelligence and how it works, let’s look at its three types.
What Are the 3 Types of AI?
1. Artificial Narrow Intelligence (ANI)
Also known as narrow AI or weak AI, artificial narrow intelligence (ANI) is a type of AI focused on a single narrow task. It is the only type of artificial intelligence in existence today.
This type has a narrow range of abilities, and most of us interact with it on a daily basis. Google Assistant, Siri, Alexa, and Cortana are examples of weak AI. Although labelled “weak”, it performs well at routine work and at tasks such as demand forecasting, product recommendation, and weather forecasting.
Even self-driving cars are made possible by narrow AI – not just one, but several narrow AI systems working in coordination, because the AI engine that handles image recognition cannot also perform the task of deciding the optimal speed to drive.
This principle extends to the manufacturing sector as well. AI-powered robots are revolutionizing production lines, performing tasks like welding, painting, and assembly with significantly improved accuracy and efficiency. Additionally, predictive maintenance systems, powered by ANI, can analyze sensor data from machines to predict potential equipment failures before they occur. This proactive approach helps to reduce downtime and ensure smooth operation.
2. Artificial General Intelligence (AGI)
Also known as strong AI or deep AI, artificial general intelligence (AGI) is a theoretical form of artificial intelligence equal to human intelligence. The field is steadily progressing, and some experts suggest we will most likely have created an AGI by 2060.
A system capable of human-level thinking is also commonly assumed to have attained consciousness. The greatest fears about AI begin with strong AI, and they are closely tied to the idea of the singularity.
By singularity (also known as technological singularity) scientists suggest that the technological growth will be so advanced and exponential that the human civilization may experience irreversible and uncontrollable changes – which are currently beyond our imagination levels.
Even if the first AGI we create doesn’t have consciousness per se (though, if it truly replicates the human brain, it arguably should), it can continually improve itself and make our capabilities look inferior.
Strong AI will have the ability to communicate, make judgements, plan, solve problems, and even reason. Since they are granted the gift of consciousness, they will be self-aware, have objective thoughts, and can grow wiser.
If strong AI goes rogue, experts suggest, it would be the beginning of the end of the human race – and it may soon develop itself into the next level of AI. Strong AI also raises the concern that many people might lose their jobs to machines.
The killer robots in the movie I, Robot, HAL 9000 of 2001: A Space Odyssey, and the T-800 of The Terminator are all examples of strong AI.
However, many suggest that strong AI may be something of the 22nd century or may not be possible at all. If AGI does become a reality and doesn’t plan to annihilate us, we will be able to effortlessly conquer other solar systems and get rid of repetitive jobs.
3. Artificial Super Intelligence (ASI)
Also known as super AI, artificial super intelligence (ASI) is the AI that can surpass human intelligence and abilities. Once we create such an advanced version of AI, our capabilities will look infinitely inferior to it.
Super AI will beat us at everything we do – science, maths, sports, relationships, medicine – everything. Its decision-making capabilities will be far superior to ours and sort of incomprehensible to us.
The Skynet of The Terminator is an excellent example of this AI type. It would be the best at everything, and if it went rogue, it would be only a matter of time before it enslaved or overthrew humans. Only once we attain AGI can we think of implementing ASI – meaning you probably won’t live to see an ASI come into existence, unless we crack the secret of immortality in the next few decades.
If we somehow do create a super AI, chances are it will strive for self-preservation, and the thought of an “off switch” may tempt it to eradicate the human race. After all, there are few reasons why a machine with superior intelligence should listen to a group of comparatively dumb creatures.
There is also a chance that super AI will be our next big step in evolution. We may evolve into an advanced species that combines biology and robotics and becomes something far beyond our wildest dreams.
Applications of Artificial Intelligence
It’s safe to say that AI becomes more robust and useful with big data. Studies suggest that utilising AI can boost business productivity by up to 40% and increase profitability by 38%. Here are some real-world use cases of AI.
AI in the Manufacturing Industry
The manufacturing industry is undergoing a significant transformation driven by AI. Here are some key areas where AI is optimizing processes and boosting efficiency:
AI-powered optimisation
- Predictive maintenance: Predictive maintenance (PdM) takes a data-driven approach to keeping equipment healthy. It continuously monitors and analyzes an asset’s performance, status, and overall health in real time. This enables proactive maintenance scheduling, preventing costly downtime and ensuring smooth operations (a minimal sketch follows this list).
- Quality control: AI-powered computer vision systems can inspect products on the assembly line with high accuracy, identifying defects that might be missed by human inspectors. This leads to improved product quality and reduced waste.
- Robot-assisted manufacturing: Collaborative robots powered by AI are increasingly used alongside human workers to perform tasks like welding, assembly, and material handling. This improves efficiency, safety, and consistency in production.
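As a concrete taste of the predictive-maintenance idea, here’s a minimal anomaly-detection sketch – the “vibration readings” are simulated, and a real system would use far more signals:

```python
# Flag sensor readings that look unlike normal, healthy operation.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(loc=70.0, scale=2.0, size=(500, 1))  # simulated vibration data

detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

new_readings = np.array([[70.5], [71.2], [84.9]])  # the last one looks abnormal
print(detector.predict(new_readings))  # -1 marks a suspected anomaly -> inspect
```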
Big data drives efficiency
Manufacturing generates vast amounts of data on production, logistics, and equipment performance. Big data analytics combined with AI unlocks valuable insights that can further optimize processes:
Supply chain optimization: Traditional supply chains can be intricate and vulnerable to disruptions – like dominoes, a delay in one area throws the entire system off balance. Big data analytics, however, is a powerful tool for taming that complexity.
By analyzing real-time data on production, demand, and logistics, manufacturers can gain a holistic view and optimize their supply chains in several ways:
- Predictive inventory management: Forecasting demand fluctuations allows manufacturers to keep just the right amount of inventory on hand – enough to avoid the stockouts that halt production, without the overstocking that ties up valuable resources (a minimal forecasting sketch follows this list).
- Improved transportation efficiency: AI steps in as a logistics mastermind. Leveraging route, traffic, and fuel data, AI optimizes delivery schedules for both timeliness and reduced transportation costs.
- Enhanced supplier collaboration: Real-time data sharing fosters a collaborative environment with suppliers. Imagine transparent visibility into production bottlenecks or sudden demand surges. This allows for better coordination and a faster response, keeping the entire supply chain running smoothly.
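And here’s the promised forecasting sketch for predictive inventory management – a deliberately naive moving-average forecast over invented weekly sales, just to show the shape of the idea:

```python
# Forecast next week's demand and size a simple safety-stock buffer.
import numpy as np

weekly_sales = np.array([120, 135, 128, 150, 160, 155, 170, 180])  # invented
window = 4

forecast = weekly_sales[-window:].mean()            # naive next-week forecast
safety_stock = 1.5 * weekly_sales[-window:].std()   # buffer against variability
print(f"order up to ~{forecast + safety_stock:.0f} units")
```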
AI in IT
The IT sector is embracing artificial intelligence for its transformative capabilities. AI enhances security through real-time threat detection, proactively identifying and combating cyberattacks. User support is streamlined by AI-powered chatbots providing 24/7 assistance, freeing IT professionals for complex tasks. And network optimization is achieved by AI analyzing traffic data to eliminate bottlenecks and ensure smooth data flow.
Furthermore, AI leverages big data to further optimize IT performance. Proactive network analysis identifies potential issues before they impact users, while user behaviour analysis personalizes IT services based on user needs. Finally, AI extracts valuable insights from log data, enabling preventative maintenance and improved system stability.
Other examples of AI applications include:
AI Powers Google’s RankBrain Algorithm
RankBrain is a machine-learning algorithm Google uses to sort its search results. It was originally 100% hand-coded, but it can now tweak the algorithm on its own.
Depending on user satisfaction levels, the algorithm increases or decreases the weight of SEO factors such as domain authority, content length, backlinks, and content freshness.
Ride-Sharing Apps Utilise AI
Ride-sharing apps like Uber and Lyft use artificial intelligence extensively for profit maximisation and route optimisation. They use machine learning to predict spikes in rider demand and introduce surge pricing accordingly.
Uber Eats also uses AI to detect fraudulent activity in the system. For example, some users exploit loopholes such as refund policies. Using AI and ML, fraudsters can be distinguished from customers who are genuinely experiencing an issue.
IBM Watson Uses AI and Big Data Analytics
Here’s how IBM Watson can help beat breast cancer.
IBM Watson is a question-answering machine powered by AI and big data analytics. While it would take a doctor nearly 10,000 weeks (almost 200 years) to read and analyse ten million patient files, Watson does the same in just 15 seconds.
Watson learns from each patient file and medical paper fed into it and can help doctors understand the relevance and application of newly added information. Watson uses NLP to understand and process inputs from doctors (and patients) and feeds the results back into the system for future use.
It has the potential to suggest the best and latest evidence-based treatment and can also enable doctors to prescribe medicines that best suit a patient’s lifestyle. Watson can also reduce the time taken to screen cancer patients by 78%.
Chase Bank Gets More Creative with AI
The ad copy created by a human (left) vs artificial intelligence (right) for Chase Bank. Image Credit: inquirer.com
Chase Bank’s partnership with Persado, a company that applies AI to marketing, is an excellent example of how artificial intelligence works in marketing. With the help of ML, Chase Bank was able to make its marketing efforts more human.
Persado supplied Chase Bank with an AI copywriter whose ads outperformed those written by humans. The AI-written copy received higher click-through rates – some of it more than double the rates of the human-written copy.
Final Thoughts
Whether artificial intelligence will bring about a utopian or a dystopian future is still an open question. Although many, including Stephen Hawking and Elon Musk, have feared that AI could outsmart us and spell the end of the human race, they have also been reasonably optimistic about, and supportive of, the benefits it can bring us.
Mishaps such as the one caused by Microsoft’s chatbot Tay make it evident that, for now, AI needs more help from us than it can offer us. Still, it’s progressing at a rate the forefathers who sparked the revolution would be proud of. And by some estimates, around 77% of the devices we use rely on AI in one form or another.