Machine Learning and AI
What is machine learning?
Machine learning is a branch of AI (Artificial Intelligence) that focuses on training computational models to make increasingly accurate decisions. The ongoing boom in the data industry means that humans cannot possibly process, and make decisions about, the massive volume of data generated by devices every minute. Human error is natural, so there is no guarantee that people could process all that data accurately even if they had the time. Machine learning models instead learn from pre-constructed (and often labelled) datasets. Once a model has "learned" to compute accurate results, it can be deployed in an application or to meet an industrial requirement.
Machine Learning at the edge simply refers to the data processing performed by AI models on the devices or servers at the edge of a network. This avoids using a centralized cloud and the issues caused by high network traffic.
What is the edge?
Cloud services allow users to access servers and storage for their online applications. The cloud provider handles all of the set-up and maintenance, but the cloud can run into problems, especially when data arrives in large volumes.
One problem for a centralized cloud is latency, since all data is stored in a single location. The edge brings computation closer to the data sources, so data can be processed near where it is produced. The edge is the point where a device accesses the network, and it can include any number of edge computing devices connected to the internet or a wireless service. This enables real-time and near-real-time processing of data, resulting in lower latency and reduced load on central servers.
Traditionally, when ML models are used with IoT devices such as mobile phones, cameras, and motion sensors, the data is sent to the cloud, where it is processed by a model, and the output is then sent back to the device.
Machine learning models
Machine learning aims to equip a machine with the mechanism to make decisions based on learning from given data. There are two types of learning processes used for training a machine learning model:
In supervised learning, the model is fed both the data and the expected results; this is known as a labelled dataset. Given the inputs and the outputs it should produce, the model learns to connect the dots, tuning itself to find the patterns that yield the desired output.
Consider, for example, a simple classification application that distinguishes between cats and dogs. The dataset would usually consist of images of dogs and cats, each one labelled as such. The model eventually learns the image features of cats and dogs and becomes able to distinguish between the two.
After training, you can test the model, or run inference, by giving it an unlabelled image of a cat or a dog that was not part of the dataset used to train it. Such images form the test data. If the model was trained properly, it should correctly classify cats and dogs based on the images alone.
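The train-then-classify idea can be sketched in a few lines of code. The snippet below uses a 1-nearest-neighbour rule on a tiny, invented dataset: the two numbers per animal stand in for image-derived features and are purely illustrative, not real measurements.

```python
import math

# Hypothetical labelled dataset: each sample is (feature vector, label).
# The feature values are invented placeholders for measurements a real
# pipeline would extract from images.
training_data = [
    ((4.0, 9.0), "cat"),
    ((5.0, 8.5), "cat"),
    ((9.0, 3.0), "dog"),
    ((8.5, 4.0), "dog"),
]

def classify(sample):
    """1-nearest-neighbour: return the label of the closest training sample."""
    _, label = min(training_data,
                   key=lambda item: math.dist(item[0], sample))
    return label

# "Inference" on unlabelled test points that were not in the training set.
print(classify((4.5, 8.0)))  # near the cat samples -> "cat"
print(classify((9.0, 3.5)))  # near the dog samples -> "dog"
```

A real classifier would learn from thousands of labelled images rather than four hand-written points, but the workflow is the same: fit on labelled data, then classify unseen inputs.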
Unlike supervised learning, the data provided for the purposes of unsupervised learning is not labelled; it is expected that the model would eventually be able to identify patterns itself.
One example is clustering. This technique is used in data-driven marketing for targeted advertisements and for content recommendations, such as those on Netflix, where a machine learning model clusters users based on relevant features such as age, gender, and interests.
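Clustering can be sketched with a plain k-means loop. The user features below (age, hours watched per week) are entirely made up for illustration; the algorithm receives no labels and still separates the users into natural groups.

```python
import random

# Hypothetical, unlabelled user features: (age, hours watched per week).
users = [(18, 20), (22, 25), (21, 22), (45, 5), (50, 4), (48, 6)]

def kmeans(points, k, iterations=10, seed=0):
    """Plain k-means: alternately assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

clusters = kmeans(users, k=2)
# The two groups separate the younger heavy viewers from the older light viewers.
```

Note that no label ever told the model which users belong together; the grouping emerges from the structure of the data itself, which is the defining trait of unsupervised learning.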
The purpose of both training methodologies is to eventually enable a machine to perform decision making based on experience.
Deep learning is sometimes treated as an informal third category; it uses layered neural networks and can incorporate elements of both supervised and unsupervised learning.
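The building block that deep networks stack into layers is the artificial neuron. As a from-scratch sketch, not a deep network, the snippet below trains a single neuron (a perceptron) with supervised weight updates until it reproduces a logical AND.

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a step activation. Deep learning stacks many of
# these into layers; this sketch trains just one.
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

# Labelled truth table for logical AND: ((input1, input2), target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Perceptron learning rule: nudge each weight in proportion to the error.
for epoch in range(20):
    for x, target in data:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in data])  # matches the AND targets: [0, 0, 0, 1]
```

A single neuron can only learn linearly separable patterns like AND; the point of deep learning is that stacking layers of such units lets a network learn far more complex patterns, such as the image features in the cat/dog example above.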
How do the edge and AI work together?
Whenever ML or AI models are deployed in real-world applications, they must be stored on servers or in storage locations that end devices can reach over a network.
End devices such as cameras and sensors constantly collect data and send it to the cloud, where the models process it; inference is done at run time and the results are communicated back to the same end devices or other relevant locations. However, as the volume of data increases, this traditional information pipeline introduces latency.
Many applications that operate in fast-paced environments, such as self-driving cars or flood warning systems, need to make decisions quickly. In such cases, the models are stored locally on the machines themselves or on edge servers located close to the end devices. These are just a few examples of edge AI, which is also known as embedded AI/ML.
Embedded AI/ML solves the latency problem, distributes the load on central servers, and reduces computing costs, because far less data has to be transferred between end locations and the cloud.
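The difference between the two pipelines can be shown with a toy simulation. The `time.sleep` calls stand in for network round-trips, the "model" is just a threshold check, and the delay value is invented purely for illustration.

```python
import time

def run_model(reading):
    """Stand-in for model inference: flag a sensor reading above a threshold."""
    return reading > 7.0

def cloud_inference(reading, network_delay=0.05):
    # Traditional pipeline: the reading travels to a central server and
    # the result travels back. The sleeps simulate the network round-trip.
    time.sleep(network_delay)   # upload to the cloud
    result = run_model(reading)
    time.sleep(network_delay)   # result sent back to the device
    return result

def edge_inference(reading):
    # Edge pipeline: the model runs on (or near) the device itself,
    # so there is no network round-trip at all.
    return run_model(reading)

start = time.perf_counter()
cloud_inference(9.0)
cloud_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
edge_inference(9.0)
edge_ms = (time.perf_counter() - start) * 1000

print(f"cloud: {cloud_ms:.1f} ms, edge: {edge_ms:.1f} ms")
```

Both pipelines produce the same answer; the edge version simply produces it without waiting on the network, which is exactly the property a self-driving car or flood warning system needs.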
How AI makes better use of data
While a model might not achieve 100% accurate results, it can process data and produce results far faster than a human, so it has a wide range of applications for enhancing efficiency, productivity, and business value.
Applications can recognize gestures and activities: sign language for gesture-based interaction with an integrated smart home system, fall detection in case of an accident, and detection of trespassing on private property.
Models have been developed that can perform complex analysis of a city's traffic patterns and execute optimal traffic-signal control protocols, reducing congestion and the situations where an accident is most likely.
Agriculture-based IoT devices can use models to monitor crops and to report and react to sudden changes in weather, pests, and other factors.
A multi-IoT integrated health system can constantly monitor a patient's vitals, raise alerts, and initiate a live consultation with the relevant practitioners if anything unusual is recorded. Because it runs on wearable devices, it can operate at home and in the workplace, making it far more convenient for the patient.
Edge computing brings computing centers closer to end devices to reduce latency and computing costs and to increase security.
Machine learning and artificial intelligence models can learn from labelled or unlabelled datasets, providing the fast data processing that is imperative in today's environment, where huge amounts of data must be constantly processed and decisions need to be made quickly.
ML/AI can be merged with edge computing for a wide range of applications in housing, urban management, agriculture, healthcare, and much more, providing faster, more intelligent, and more convenient user experiences.