
Monthly Archives: January 2024


IoT-Driven Driver Health Management Systems – Towards a Safer Driving Experience

Category : Embedded Blog

A driver’s role extends beyond steering and acceleration. Picture the open road, the engine’s hum harmonizing with the passing scenery; yet this tranquility belies an ever-present threat: sudden driver health issues that could have disastrous implications for road safety.

As we accelerate towards a future of heightened transportation demands, the dangers of driver fatigue and health-related impairments loom large. A momentary lapse in focus could cascade into catastrophe.

To address this, envision a new era where vehicles function as vigilant guardians. Driver health management systems detect early signs of fatigue, stress, and underlying health conditions via body vitals, intervening to ensure alertness, well-being, and prompt actions.

The Foundation: IoT in Monitoring Driver Health

In our tech-infused transportation landscape, the imperative for such systems is clear. Understanding the dangers hiding beneath the surface propels us towards cutting-edge solutions. It’s not just about safeguarding vehicles; it’s about protecting lives.

IoT enhances the connectivity capabilities of any modern vehicular technology solution, and IoT sensor nodes play a crucial role by collecting data related to body vitals. In this blog, let’s take a look at how researchers are using IoT as a foundation to develop life-saving on-road technologies in the form of driver health management systems.

Vital Signs in Driver Health Monitoring

The critical parameters tracked in these driver health monitoring systems are vital body signs such as heart rate (HR), respiration rate (RR), electrocardiogram (ECG), electromyography (EMG), and galvanic skin response (GSR).

How do these vital signs play a role in defining a driver’s health? Let’s find out.

  • Heart Rate (HR): Heart rate monitoring assesses the general rate and rhythm of the heart. It is a broad measure of cardiovascular activity and can be indicative of overall physical exertion, stress, or relaxation.
  • Electrocardiogram (ECG): An ECG provides a graphical representation of the heart’s electrical activity over time. It can identify specific patterns, irregularities, and abnormalities in the heart’s rhythm, which may indicate cardiac conditions such as arrhythmias and atrial fibrillation.
  • Respiratory Rate (RR): Respiratory rate monitoring evaluates the frequency and rhythm of breathing. It is a general measure of respiratory activity, providing insights into changes in breathing patterns related to physical activity, stress, illness, or overall well-being.
  • Galvanic Skin Response (GSR): GSR measures the change in the skin’s electrical activity caused by secretions of the eccrine sweat glands. Because these glands respond differently to emotions such as sadness, fear, and joy, tracking GSR helps in monitoring driver emotions.

Note: Heart rate monitoring is a broader measure of the heart’s beats per minute, offering a general overview of cardiovascular activity. On the other hand, ECG is a more advanced diagnostic tool that provides detailed information about the heart’s electrical activity and can detect specific cardiac abnormalities.
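
To make this distinction concrete, here is a minimal sketch of how a beats-per-minute value might be derived from a raw ECG trace by detecting R-peaks. The sampling rate and peak-detection thresholds are illustrative assumptions, not parameters of any specific system discussed here.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_ecg(ecg_signal, sampling_rate_hz=250):
    """Estimate heart rate (BPM) from a raw ECG trace by detecting R-peaks.

    The sampling rate and thresholds are illustrative; real systems tune
    peak detection to their specific sensor hardware.
    """
    # R-peaks are the tallest deflections; require a minimum spacing of
    # 0.4 s (i.e. at most ~150 BPM) to avoid double-counting.
    min_distance = int(0.4 * sampling_rate_hz)
    peaks, _ = find_peaks(
        ecg_signal,
        height=np.mean(ecg_signal) + 2 * np.std(ecg_signal),
        distance=min_distance,
    )
    if len(peaks) < 2:
        return None  # not enough beats detected to estimate a rate

    # R-R intervals in seconds, converted to beats per minute.
    rr_intervals = np.diff(peaks) / sampling_rate_hz
    return 60.0 / np.mean(rr_intervals)
```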

Non-Invasive Driver Stress or Fatigue Management Systems

Driving around the city, where every scenario is unpredictable, can take any person’s mental peace out for a spin. The constant stress that drivers of modern automobiles face cannot be overstated. IoT sensors have replaced electrodes to enable non-invasive ways of tracking and managing driver health. These sensors are strategically placed on steering wheels, seats, and seatbelts, and on-the-go analysis of the incoming vital health data is made possible by IoT cloud & analytics capabilities.


Considering ECG, EMG, and GSR as Health Parameters

Here, we analyse the study presented in a recent research report. To continuously track the stress levels of drivers, a team of researchers designed an AI-based Driver Assistance System (AI-DAS).

The proposed system detects mental stress in automotive drivers by utilizing physiological signals such as ECG, EMG, GSR, and respiration rate. It integrates a three-phase stress detection technique (SDT) consisting of bio-signal pre-processing, feature extraction, and classification.

It employs a Random Forest (RF) machine learning model to accurately differentiate between stressed and relaxed states. The RF model is well known for classification and regression problems, and it works by considering the outputs of multiple decision trees.

Here, each decision tree is trained on a different combination of the scenarios and health parameters the drivers are exposed to, and the forest combines their individual outputs into a single prediction.

The system achieves exceptional accuracy and sensitivity, and its ability to process signals quickly and reliably makes it a dependable driver health management system. The RF model is just one of the many AI/ML algorithms that can be used to develop systems with this functionality.
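
As an illustration of how such an RF-based stress detector might be wired up, here is a minimal sketch that trains a classifier on hypothetical, pre-extracted physiological features. The feature layout, data, and labels are placeholders for illustration only, not the actual AI-DAS implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical pre-extracted features per driving window:
# [mean_hr, hrv_rmssd, emg_power, gsr_level, respiration_rate]
# In a real system these would come from the bio-signal pre-processing
# and feature-extraction phases of the SDT.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))        # placeholder feature matrix
y = rng.integers(0, 2, size=500)     # 0 = relaxed, 1 = stressed (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Random Forest: an ensemble of decision trees, each trained on a
# bootstrapped sample and a random subset of features; the forest
# aggregates their votes into a single stressed/relaxed prediction.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```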

Considering Heart Rate (HR) and Respiration Rate (RR)

The HARKEN system (Heart and Respiration In-Car Embedded Non-Intrusive Sensors) is a comprehensive setup involving a seat sensor, a seat belt sensor, and a Signal Processing Unit (SPU). The functioning of this system is explained in this research paper.

The SPU plays a crucial role in processing real-time sensor data collected from the seat belt and seat cover, capturing physiological signals to monitor fatigue-related physiological activity and help prevent car accidents.

This monitoring extends to both mechanical and physiological activity associated with respiration and the cardiac cycle.

One of the innovative aspects of the system involves the integration of smart textile materials into the seat cover and safety belt. This integration allows the system to detect and filter out noise and artifacts caused by the vehicle’s motion.

Additionally, it calculates various parameters such as heart rate variability and respiration signals, presenting this information in a format that can be seamlessly integrated into a fatigue detector.
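
A hedged sketch of the kind of processing an SPU might perform is shown below: band-pass filtering a seat-sensor trace to suppress vehicle-motion artifacts, then computing a simple heart rate variability metric (RMSSD) from beat-to-beat intervals. The sampling rate, cutoff frequencies, and data are illustrative assumptions, not HARKEN’s actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, low_hz, high_hz, fs):
    """Zero-phase Butterworth band-pass filter (illustrative cutoffs)."""
    sos = butter(N=4, Wn=[low_hz, high_hz], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def rmssd(rr_intervals_s):
    """Root mean square of successive differences of R-R intervals (seconds)."""
    diffs = np.diff(rr_intervals_s)
    return np.sqrt(np.mean(diffs ** 2))

fs = 250  # assumed sampling rate in Hz
raw = np.random.default_rng(1).normal(size=10 * fs)  # placeholder seat-sensor trace

# Separate the cardiac and respiratory bands (approximate, assumed ranges).
cardiac = bandpass(raw, low_hz=0.7, high_hz=3.5, fs=fs)      # ~42-210 beats/min
respiration = bandpass(raw, low_hz=0.1, high_hz=0.5, fs=fs)  # ~6-30 breaths/min

# R-R intervals would normally come from peak detection on the cardiac band;
# placeholder values are used here purely to show the HRV calculation.
example_rr = np.array([0.82, 0.80, 0.85, 0.79, 0.83])
print("RMSSD (s):", rmssd(example_rr))
```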

Moreover, the system overcomes certain limitations of conventional systems by utilizing smart textile materials and addressing issues related to the placement and pressure of safety belt straps for physiological monitoring. This innovative approach aims to enhance the accuracy and reliability of monitoring physiological signals in the context of driving, ultimately contributing to a safer driving experience.

Conclusion

Several driver health management systems currently in the works function by predicting and managing health conditions such as stroke.

The Internet of Things (IoT) plays a crucial role in developing life-saving technologies by enabling seamless communication between devices. This creates an intelligent technology ecosystem that thrives on interconnectedness, ultimately leading to better outcomes for those who rely on these technologies.

These capabilities will keep drivers alert, suggest rest when needed, and report incidents as and when they occur, enabling a quick response from first responders.



Face Recognition Technology – A Modern Security Solution for Cars

Category : Embedded Blog

In the city of bright lights and sleek rides, a new era unfolded for automobile owners. The vehicles could recognize their owners at a glance and come alive with a futuristic hum. Keys were yesterday; cars in this city identified their owners like old friends with the help of face recognition technology.

In a world where automotive security troubles ranged from the annoying to the dangerous, this technology was a game-changer. No more fumbling for keys—security was as simple as looking at your car and feeling it respond.

As the owners settled into their seats, a sense of trust enveloped them. The car became an extension of them, a smart companion navigating the bustling streets. In a landscape of evolving threats, facial recognition stood as a silent guardian, ensuring the driver and the vehicle were in perfect sync. This is not just a drive but a seamless experience that blends technology and security.

Automotive technology enthusiasts know that this is not just a vision; it’s something we can witness in our cars in the future.

In the current scenario, Automotive OEMs are slowly, but surely, beginning to trust and integrate face recognition as a reliable and modern security feature. Exhibit A – Hyundai’s Genesis GV60.

The growing trend and trust in intelligent solutions set a precedent for such technology to thrive! This article breaks down facial recognition as a vehicular biometric technology.


Driver Face Recognition – A Modern Safety and Security Solution

Modern security problems require modern security solutions. Take a look at how integrating face recognition in cars can help combat vehicle theft and curb accidents.

The Need for Advanced Automotive Safety and Security

Technology has been evolving at lightning speed, but so have the threats associated with its secure usage. Automobile theft poses a real problem, with 1,001,967 vehicles stolen in the US during 2022 alone. Thus, integrating advanced security solutions to protect our prized assets on wheels is the need of the hour.

Face recognition in cars not only helps protect owners by preventing theft, but it can also ensure driver safety. For instance, the camera in any face recognition system plays a crucial role. With its inputs and tailor-made algorithms, the vehicle can assess the driver’s fatigue and offer necessary safety alerts and suggestions.

The alerts can be delivered through seat vibrations and alarms, while the infotainment system and rear seat entertainment setup can provide suggestions to the driver and passengers if they opt for it.

The Origins

First things first, face recognition technology isn’t new, but integrating it as a car safety feature sure is. Woodrow W. Bledsoe and his team were the first to experiment with this futuristic solution between 1964 and 1966. The objective of the experiments was to look into the possibility of recognizing faces through computer programming.

However, due to the novelty of the technology at the time, the team faced several challenges arising from the variability the computer had to handle. Appropriate solutions (algorithms) that address these challenges are listed below:

  • Head Rotation: Multi-Angle Detection
  • Tilting: 3D Face Recognition
  • Light Intensity and Angles: Lighting or Illumination Normalization
  • Facial Expression and Aging: Dynamic Feature Analysis and Independent Component Analysis

Despite modern systems achieving accuracy of up to 99.97%, some of these problems still affect the accuracy of facial recognition.

Recognizing Owners by their Facial Features – The Process

A conventional face recognition system typically entails the sequential processes of face detection, feature extraction, and processing.

Face recognition involves comparing an image against a stored repository of faces, aiming to ascertain the identity of the subject portrayed in the input image. Several pivotal factors, including shape, size, pose, occlusion, and illumination, intricately influence this identification process.

Primary face recognition focuses on discerning unique facial landmarks, including nose width, eye dimensions, jaw attributes, cheekbone elevation, and eye separation. This leads to the creation of a distinctive numerical code. Subsequently, the system undertakes a comparative analysis of this numerical code with another image, discerning the degree of similarity between the two pictures.

Automobiles such as cars follow a similar procedure to recognize faces!
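
To illustrate the comparison of these numerical codes, here is a minimal sketch that scores two face feature vectors with cosine similarity and applies a threshold. The vectors and the threshold value are made-up placeholders; a production system would obtain them from its own feature extractor and tuning process.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(probe_code, enrolled_code, threshold=0.8):
    """Compare a probe's numerical code against an enrolled owner's code.

    The 0.8 threshold is purely illustrative; real systems tune it to
    balance false accepts against false rejects.
    """
    return cosine_similarity(probe_code, enrolled_code) >= threshold

# Placeholder 128-dimensional feature vectors standing in for the codes
# produced by a face recognition feature extractor.
rng = np.random.default_rng(42)
enrolled_owner = rng.normal(size=128)
probe = enrolled_owner + rng.normal(scale=0.1, size=128)  # slightly perturbed capture

print("Match:", is_same_person(probe, enrolled_owner))
```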


Algorithms that Enable Face Detection and Recognition in Automobiles

The development of AI & ML algorithms is at the heart of face recognition technology in cars. These algorithms leverage advanced mathematical and computational techniques to extract meaningful features, learn patterns, and make accurate predictions in facial recognition tasks.

While designing technology of this magnitude for automobiles, the following algorithms have resulted in highly accurate solutions:

Eigenface-based methods (PCA algorithm)

Principal Component Analysis (PCA) identifies patterns by transforming correlated variables (pixel values in face images) into a new set of uncorrelated variables called principal components. Eigenfaces are the principal components representing the most significant variations among face images.

The car uses these eigenfaces for recognizing and categorizing new faces, focusing on the features that contribute the most to the overall variability. Principal Component Analysis aims to decrease the complexity of the data by maintaining as much of the original dataset’s variation as feasible.
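
A minimal sketch of the eigenface idea using scikit-learn’s PCA follows, assuming a set of aligned, flattened grayscale face images; the image size, component count, and data are illustrative placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder dataset: 200 aligned grayscale faces of 64x64 pixels,
# each flattened into a 4096-dimensional vector.
rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))

# Keep the top principal components ("eigenfaces") that capture the
# largest variations across the training faces.
pca = PCA(n_components=50, whiten=True)
projections = pca.fit_transform(faces)            # each face as a 50-D code
eigenfaces = pca.components_.reshape((50, 64, 64))

# A new face is recognized by projecting it into eigenface space and
# finding the nearest enrolled projection.
new_face = rng.random((1, 64 * 64))
new_code = pca.transform(new_face)
nearest = np.argmin(np.linalg.norm(projections - new_code, axis=1))
print("Closest enrolled face index:", nearest)
```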

Linear Discriminant Analysis (LDA)

LDA aims to find a linear combination of features that maximizes the separation between different classes (individual faces). It calculates the within-class scatter matrix (which quantifies the variability of data points belonging to the same person’s face) and the between-class scatter matrix (which quantifies the separation between different people’s faces) to determine the optimal projection that maximizes differences between the faces of different people while minimizing variations within the same person’s face.

This results in a set of features that effectively differentiates between individuals.
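
A hedged sketch of LDA-based projection with scikit-learn follows, again on placeholder data; in practice LDA is often applied after PCA so that the scatter matrices stay well-conditioned.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder data: 5 enrolled people, 20 images each, 100-D feature
# vectors (e.g. PCA projections of face images).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 100))
y = np.repeat(np.arange(5), 20)   # person labels 0..4

# LDA finds projections that maximize between-person separation while
# minimizing within-person variation.
lda = LinearDiscriminantAnalysis(n_components=4)  # at most (classes - 1) components
X_lda = lda.fit_transform(X, y)

probe = rng.normal(size=(1, 100))
print("Predicted person:", lda.predict(probe)[0])
```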

Independent Component Analysis (ICA)

ICA assumes that a face image is a linear combination of independent sources, each representing a specific facial feature. After breaking down the facial features into independent components, the algorithm isolates and identifies individual features. This robust approach allows the model to handle variations in lighting, expressions, and other factors.
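
For completeness, here is a short sketch of the decomposition step using scikit-learn’s FastICA on placeholder face vectors; the number of components is an arbitrary assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder dataset: 200 flattened face images as 1024-D vectors.
rng = np.random.default_rng(2)
faces = rng.random((200, 1024))

# FastICA decomposes the faces into statistically independent components,
# each loosely corresponding to an isolated facial feature.
ica = FastICA(n_components=40, random_state=0, max_iter=500)
codes = ica.fit_transform(faces)           # independent-component representation
independent_components = ica.components_   # one row per learned component

print("Per-face code shape:", codes.shape)
```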

Elastic Bunch Graph Matching Technique

This technique represents a face as a graph of nodes connected by elastic edges, capturing the spatial relationships between facial features. The elasticity enables the model to adapt to variations in facial expressions or poses, providing a flexible and accurate representation for recognizing faces in cars.

Neural networks (NN)

Neural networks consist of layers of interconnected nodes (neurons), each associated with weights that get adjusted during training. These networks learn hierarchical representations of features, capturing complex patterns in face images.

The activation of neurons in the output layer corresponds to the face classification, allowing the model to generalize well to new, unseen faces.

Support Vector Machine (SVM)

Support vector machines (SVM) classify faces by finding the hyperplane that maximally separates different face classes. Think of it like drawing a line between two groups of people in a room based on certain features, like height and hair color. The SVM is the brain deciding the best line (hyperplane) to differentiate the two groups.

So, when new faces come in, it can quickly assign them to the group they belong to based on their features. The system can create two groups, authorized and unauthorized, to grant or deny vehicle access! SVMs are particularly powerful when dealing with high-dimensional feature spaces.
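
The authorized/unauthorized grouping described above can be sketched with a linear SVM on placeholder face feature vectors; in a real vehicle the features would come from the face recognition pipeline rather than random data.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder features: 50 captures of the authorized owner (label 1) and
# 50 captures of other people (label 0), each a 128-D face code.
rng = np.random.default_rng(3)
owner = rng.normal(loc=1.0, size=(50, 128))
others = rng.normal(loc=-1.0, size=(50, 128))
X = np.vstack([owner, others])
y = np.array([1] * 50 + [0] * 50)

# The SVM learns the hyperplane that maximally separates the two groups.
svm = SVC(kernel="linear")
svm.fit(X, y)

new_capture = rng.normal(loc=1.0, size=(1, 128))
print("Grant access:", bool(svm.predict(new_capture)[0]))
```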

Due to the increased emphasis on developing intelligent technologies, innovators in the automotive industry rely heavily on deep learning and artificial intelligence-based solutions.

Deep Learning and Artificial Intelligence for Face Recognition

Deep learning involves utilizing deep neural networks with multiple layers. It learns intricate features and representations from the dataset. Deep learning for facial recognition involves Convolutional Neural Networks (CNNs) or other specialized architectures. CNNs excel at capturing spatial hierarchies, making them well-suited for image-based tasks like face recognition. Feature extraction and learning are the two other essential aspects of facial recognition, and these components are utilized in making cars safer.

  1. Hierarchical Feature Extraction
    • Deep networks learn features step by step.
    • Early layers focus on basics like edges and textures.
    • Deeper layers handle complex structures like facial contours.
  2. Representation Learning
    • Deep learning is best at finding meaningful patterns.
    • By encoding faces into a space where similar faces are close, deep learning assists in identifying faces under different constraints.
    • It can differentiate between individuals easily.

Through continuous learning and exposure to diverse examples, the AI-based system’s ability to recognize faces in cars will improve over time. Custom artificial intelligence and machine learning algorithms can help automotive OEMs unlock a wide range of face recognition-based solutions.
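
As a hedged illustration of the hierarchical feature extraction and representation learning described above, here is a toy PyTorch CNN that maps a face image to a compact embedding. The layer sizes are arbitrary and the network is untrained, so it only shows the structure, not a usable recognizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbeddingNet(nn.Module):
    """Toy CNN: early layers capture edges and textures, deeper layers capture
    larger facial structures, and the final layer emits a 128-D embedding."""

    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.embed = nn.Linear(64 * 8 * 8, embedding_dim)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        # L2-normalize so that similar faces map to nearby points in the space.
        return F.normalize(self.embed(x), dim=1)

# One grayscale 64x64 face as a placeholder input.
model = FaceEmbeddingNet()
embedding = model(torch.randn(1, 1, 64, 64))
print(embedding.shape)  # torch.Size([1, 128])
```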

Conclusion

The journey towards widespread adoption of face recognition in cars is just the beginning. IoT solutions are the backbone, connecting the intricate network of sensors, cameras, and processing units. As face recognition technology evolves, the integration of IoT enables real-time communication and continuous improvement.

The synergy between face recognition and IoT empowers vehicles not only to recognize their owners but also to adapt and respond to dynamic safety challenges.

IoT service providers, such as Embitel, armed with expertise in AI and ML services, play a crucial role in shaping the future of automotive security. Our ability to develop and implement advanced algorithms will assist in harnessing the power of AI in the automotive context.



Digitization of Container Management and Tracking for a European Shipping and Logistics Leader

 

About the customer

Our customer is a giant in the shipping and logistics industry, headquartered in Europe with operations across the globe.

They wanted to partner with an expert consulting team for technical solutions and implementation support for their operations.

Business Challenges

Our customer lacked visibility into the location and status of their containers. Due to their large and complex organizational structure, they found it difficult to streamline the container tracking process.

Their containers were docked across ports, third-party warehouses, and their own storage spaces, but they had no centralized system to track and manage these containers or the information associated with them.

This led to inefficiencies and high costs when they had to reassign the containers to another ship panel. Many containers were left on the port or other leased spaces without proper planning or optimization.

The project aimed to solve this problem by creating a digital platform where every source could upload their data regularly, regardless of the format. This enabled them to have a clear picture of where each container was and how to optimize the reassignment process. The priority was to clear the containers from the port, which was the most expensive place to store them, and move them to the company’s own space or third-party spaces, which were cheaper alternatives.
 

Embitel Solution

We developed a digital solution that automates the tracking and management of the customer’s containers efficiently. As a pilot project, they decided to deploy the solution for their operations in the US.

For our customer, this resulted in less waiting time for drivers, savings from retrieving containers that were incurring extra fees, and more. Our team designed tools to monitor these operations on a real-time dashboard.

The dashboard helps to address the following processes:

  1. Predicting and planning ahead for incoming containers, using graphs and tables to identify peak periods at the port. We did this by:
    • Integrating data from two data sources and transforming the manual calculations into an algorithm that shows this data in real time.
  2. Managing driver capacity and scheduling of containers for better visibility and action, using KPIs, graphs, and tables. We accomplished this by:
    • Filtering the data to get the relevant information and creating rules for the end user to filter the table based on the desired information. For example, the end user can filter by appointment date to display only the results with a specific appointment date.
    • Calculating the relevant metrics in the back end and presenting them through color-coded graphs, giving the user an intuitive understanding of the state of appointments over the coming days.
  3. Tracking and monitoring containers and trucks to know exactly where containers are and for how long, using a map with additional information in tables.
    • By combining data from several different sources, we were able to consolidate the driver and yard information and plot the live positions of containers and drivers on the map.
    • By creating tables that store this information, we were able to define rules so that the end user can spot drivers and containers that have been idle and investigate them (a minimal sketch of this idle-flagging logic follows this list).
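
A minimal pandas sketch of the idle-flagging rule from point 3 is shown below; the column names, threshold, and sample rows are hypothetical placeholders, not the customer’s actual schema.

```python
import pandas as pd

# Hypothetical consolidated container feed (placeholder column names).
containers = pd.DataFrame({
    "container_id": ["C101", "C102", "C103"],
    "location": ["Port A", "Own Yard", "3rd-Party Warehouse"],
    "last_movement": pd.to_datetime(["2024-01-02", "2024-01-10", "2023-12-20"]),
})

IDLE_THRESHOLD_DAYS = 7  # illustrative cutoff for flagging a container as idle

now = pd.Timestamp("2024-01-15")
containers["idle_days"] = (now - containers["last_movement"]).dt.days
idle = containers[containers["idle_days"] > IDLE_THRESHOLD_DAYS]

# These rows would be surfaced on the dashboard for investigation.
print(idle[["container_id", "location", "idle_days"]])
```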

The administrators can easily find the information they need by using the features on their website. They have search options, filters and user interface layers that help them access the relevant data from the databases.
 

Embitel Impact

This project resulted in total savings of $1.18M for our customer in just one week, at a single location.

The project’s success reinforced our customer’s belief in our capabilities, and we are now working on Phase 2 of the project.

Tools and Technology

The data engineering tech stack includes:

  • Azure platform
  • Azure Data Factory
  • Python