In today’s fast-paced world, where information travels at the speed of light and decisions are made in the blink of an eye, a silent revolution is taking place. Picture this: You’re navigating the labyrinth of online shopping, and before you even type a single letter into the search bar, a collection of products appears, perfectly tailored to your taste. You’re on a video call with a friend, and suddenly, in real time, your spoken words transform into written text on the screen with eerie accuracy. Have you ever wondered how your favorite social media platform knows exactly what content will keep you scrolling for hours?
Welcome to the era of Artificial Intelligence (AI), where the invisible hand of technology is reshaping the way we live, work, and interact with the world around us. As we stand at the crossroads of innovation and discovery, the profound impact of AI is becoming increasingly undeniable.
In this blog, we embark on a journey to unravel the mysteries of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), and to see how they not only keep pace with the present but set the rhythm for the future.
Demystifying the trio – AI, ML, and DL
The terms Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are often used interchangeably, but they are not the same thing.
At a very high level, DL is a subset of ML, which in turn is a subset of AI.
AI is any program that can sense, reason, act, and adapt. In essence, it is a machine exhibiting any form of intelligent behavior.
ML is a subset of AI in which the machine not only replicates intelligent behavior but continues to learn as it is exposed to more data.
Finally, DL is a subset of Machine Learning. It, too, improves as it is exposed to more data, but it refers specifically to algorithms built on multi-layered neural networks.
Deep Dive into ML
Machine Learning is the study and construction of programs that are not explicitly programmed by humans, but rather learn patterns as they’re exposed to more data over time.
For instance, if we’re trying to decide whether emails are spam or not, we start with a dataset of emails labeled spam versus not spam. These emails are preprocessed and fed through a Machine Learning algorithm that learns the patterns for spam versus not spam; the more emails it sees, the better the model gets. Once the algorithm is trained, we can use the model to predict spam versus not spam on new emails.
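The spam example above can be sketched in a few lines of Scikit-Learn. This is a minimal illustration, not a production spam filter: the emails and labels below are hypothetical, and a bag-of-words count with Naive Bayes stands in for the preprocessing and learning steps described in the text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical dataset: 1 = spam, 0 = not spam
emails = [
    "win a free prize now", "limited offer click here",
    "claim your free reward",
    "meeting at noon tomorrow", "please review the attached report",
    "lunch with the team on friday",
]
labels = [1, 1, 1, 0, 0, 0]

# Preprocess (bag-of-words counts) and train in one pipeline
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Predict spam vs. not spam on unseen emails
print(model.predict(["free prize offer", "see you at the meeting"]))
```

With more labeled emails, the learned word statistics improve, which is exactly the “more data, better model” behavior described above.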
Types of ML
In general, there are two types of Machine Learning: Supervised Learning and Unsupervised Learning.
For supervised learning, we have a target column or labels; for unsupervised learning, we do not.
The goal of supervised learning is to predict that label. An example of supervised learning is fraud detection. We can define our features to be transaction time, transaction amount, transaction location, and category of purchase. Combining these features, we should be able to predict, for a future transaction, whether there is unusual activity and whether the transaction is fraudulent or not.
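A minimal sketch of this supervised setup, using synthetic data: the feature values and the choice of a random forest are assumptions for illustration, not a real fraud model, but the shape of the task (labeled examples in, a predicted label out) matches the description above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per transaction: [amount, hour of day, category id]
# Normal transactions: modest amounts at daytime hours
normal = np.column_stack([
    rng.uniform(5, 200, 200),      # amount
    rng.integers(8, 22, 200),      # hour
    rng.integers(0, 5, 200),       # category
])
# Fraudulent transactions: large amounts at unusual hours
fraud = np.column_stack([
    rng.uniform(1500, 5000, 200),
    rng.integers(0, 5, 200),
    rng.integers(0, 5, 200),
])

X = np.vstack([normal, fraud])
y = np.array([0] * 200 + [1] * 200)   # the label we want to predict: 1 = fraud

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A $3,000 purchase at 3 a.m. looks fraudulent; a $40 purchase at noon does not
print(clf.predict([[3000, 3, 1], [40, 12, 1]]))
```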
In unsupervised learning, the goal is to find the underlying structure of the dataset without any labels. An example would be customer segmentation for a marketing campaign. Here, we may have e-commerce data and want to separate the customers into groups so we can target them accordingly. In unsupervised learning, there is no right or wrong answer.
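The segmentation example can be sketched with k-means clustering. The two spending patterns below are invented for illustration; the point is that the algorithm receives no labels at all, yet still groups similar customers together.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical e-commerce features per customer: [annual spend, orders per year]
budget  = np.column_stack([rng.normal(200, 30, 50),   rng.normal(5, 1, 50)])
premium = np.column_stack([rng.normal(2000, 200, 50), rng.normal(40, 5, 50)])
X = np.vstack([budget, premium])

# No labels: k-means discovers the two groups from the data's structure alone
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Customers with the same spending pattern land in the same cluster
print(km.labels_[:3], km.labels_[-3:])
```

Note that the cluster numbers themselves are arbitrary, which reflects the “no right or wrong answer” point: the structure is real, but the labels are ours to interpret.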
Machine Learning Workflow
The machine learning workflow consists of:
- Problem statement
- Data collection
- Data exploration and preprocessing
- Modeling and fine-tuning
- Validation
- Decision Making and Deployment
So, our first step is the problem statement. What problem are we trying to solve? For example, we may want to identify different breeds of dogs, which can be done with image recognition.
The second step is data collection. What data do we need to solve the problem? To classify different dog breeds, we would need not just a single picture of each breed but many pictures in different lighting and from different angles, all correctly labeled.
The next step is data exploration and preprocessing. This is when we clean our data as much as possible so that our model can predict accurately. It includes a deep dive into the data, a look at distribution counts, and heat maps of pixel densities.
After that we reach the modeling step: building a model to solve our problem. We start with some basic baseline models and then validate them. Did the model solve the problem? We check by holding out a set of pictures the model has never trained on and seeing how well it can classify those images, given the labels that we have.
Then comes decision-making and deployment. If the model reaches the required range of accuracy, we move it forward into higher environments (Staging and Production) after communicating with the relevant stakeholders.
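The modeling and validation steps above can be sketched end to end. As a stand-in for the dog-breed photos (which we don’t have here), this uses Scikit-Learn’s built-in handwritten-digit images; the baseline model and the accuracy threshold are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled images (small 8x8 digit pictures standing in for dog-breed photos)
X, y = load_digits(return_X_y=True)

# Hold out images the model never trains on, for validation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Start with a basic baseline model; fine-tune only if validation falls short
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"validation accuracy: {accuracy:.2f}")
```

Only once the validation accuracy clears the agreed threshold would the model move on toward Staging and Production.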
Deep Dive into Deep Learning (DL)
Defining features in an image is a much more difficult task and has long been a limitation of traditional Machine Learning techniques. Deep Learning, however, has done a good job of addressing this.
So, suppose we want to determine whether an image shows a cat or a dog; what features should we use? An image is stored as numerical data describing the color of each pixel, so a pixel could be used as a feature. However, even a small image of 256 by 256 pixels contains 65,536 pixels, and 65,536 pixels means 65,536 features, which is a huge number of features to work with.
Another issue is that using each pixel as an individual feature means losing its spatial relationship to the pixels around it. In other words, the information in a pixel makes sense only relative to its surrounding pixels: different pixels make up the nose, different pixels make up the eyes, and separating them according to where they sit on the face is quite a challenging task. This is where Deep Learning comes into the picture. Deep Learning techniques learn features on their own, combining pixels in ways that capture these spatial relationships.
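A tiny NumPy sketch makes both problems concrete: flattening a toy 4x4 “image” turns the pixel grid into a feature vector, discarding which pixels were neighbours, and the arithmetic shows how quickly the feature count grows.

```python
import numpy as np

# A tiny 4x4 grayscale "image" (values stand in for pixel intensities)
image = np.arange(16).reshape(4, 4)

# Traditional ML flattens the grid into one feature vector ...
flat = image.flatten()
print(flat.shape)  # (16,) -- one feature per pixel, neighbourhood info lost

# ... and for a modest 256x256 image that already means
print(256 * 256)   # 65536 features
```

Deep networks avoid this by operating on the 2-D grid directly, so neighbouring pixels stay neighbours.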
Deep Learning is Machine Learning that uses very complicated models called deep neural networks. It is cutting edge and is where most Machine Learning research is focused at present. It has shown exceptional performance compared to other algorithms when dealing with large datasets.
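As a small taste of a multi-layered network, here is a sketch using Scikit-Learn’s `MLPClassifier` on a toy non-linear problem (the two-half-moons dataset). The dataset choice and the two 16-unit hidden layers are illustrative assumptions; real deep networks stack many more layers and train on far more data.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A non-linear toy problem: two interleaving half-moons of points
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multi-layer network: two hidden layers of 16 units each
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.2f}")
```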
However, it is important to note that with smaller datasets, standard Machine Learning algorithms often perform significantly better than Deep Learning algorithms. Also, if the data changes a lot over time and there isn’t a steady dataset, in that case, Machine Learning will probably do a better job in terms of performance over time.
Libraries Used for AI Models
We can use the following Python libraries:
- NumPy for numerical analysis
- Pandas for reading data into DataFrames
- Matplotlib and Seaborn for visualization
- Scikit-Learn for machine learning
- TensorFlow and Keras for deep learning specifically
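A quick sketch of how the first few of these libraries fit together: NumPy generates numerical data, Pandas holds it in a DataFrame, and Scikit-Learn fits a model to it. The data here is synthetic, generated around an assumed slope of 3 purely for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# NumPy: generate numerical data
rng = np.random.default_rng(2)
df = pd.DataFrame({"x": rng.uniform(0, 10, 100)})       # Pandas: tabular data
df["y"] = 3 * df["x"] + rng.normal(0, 0.1, 100)         # y ~ 3x plus noise

# Scikit-Learn: fit a model to the DataFrame's columns
model = LinearRegression().fit(df[["x"]], df["y"])
print(round(model.coef_[0], 1))  # recovers a slope close to 3
```

Matplotlib or Seaborn would typically be used at this point to plot `df["x"]` against `df["y"]` and inspect the fit visually.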
How is AI creating an impact for us today? Is this era of AI different?
The two spaces where we see drastic growth and innovation today are computer vision and natural language processing.
The sharp advancements in computer vision are impacting multiple areas. Some of the most notable are in the automobile industry, where cars can drive themselves. In healthcare, computer vision is now used to review different imaging modalities, such as X-rays and MRIs, to diagnose illnesses. We are fast approaching the point where machines perform as well as, if not better than, medical experts.
Similarly, natural language processing is booming, with vast improvements in its ability to transcribe speech into text, determine sentiment, cluster news articles, write papers, and much more.
Factors that have contributed to the current state of Machine Learning are:
- Bigger data sets
- Faster computers
- Open-source packages
- Wide range of neural network architectures
We now have larger and more diverse datasets than ever before. With cloud infrastructure in place to store copious amounts of data far more cheaply, and with ready access to powerful hardware for processing it, we can learn underlying patterns from larger, richer datasets across a multitude of fields. All of this is leading to cutting-edge results.
For instance, our phones can recognize our faces and our voices, they can look at pictures and identify pictures of us and our friends. We have stores where we can walk in and pick things up such as Amazon Go and not have to go to a checkout counter. We have our homes being powered by our voices telling smart machines to play music or switch the lights on/off.
All of this has been driven by the current era of artificial intelligence. AI is now used to aid in medical imaging. In drug discovery, a great example is Pfizer, which is using IBM Watson to leverage machine learning to power its search for immuno-oncology drugs. Patient care is being driven by AI, and AI research within the healthcare industry has helped advance sensory aids for the deaf, the blind, and those who have lost limbs.
How can Fresh Gravity help?
Fresh Gravity has rich experience and expertise in Artificial Intelligence. Our AI offerings include Machine Learning, Deep Learning Solutions, Natural Language Processing (NLP) Services, Generative AI Solutions, and more. To learn more about how we can help elevate your data journey through AI, please write to us at info@freshgravity.com or you can directly reach out to me at debayan.Ghosh@freshgravity.com.
Please follow us at Fresh Gravity for more insightful blogs.