The main goal of the Automotive Use Case of Next Perception is to develop a driver monitoring system (DMS) able to detect the visual and cognitive distraction and the emotional state of the driver. We integrated several kinds of sensors and processing modules into the driving simulator in order to assess the driver's status.
One of the main components of the resulting system is a processing module named Joint Driver-Vehicle Status Classifier (JDVS), which oversees both the driver and the vehicle. The JDVS monitors the driver through a thermal camera and a wearable heart rate sensor; the data collected by these sensors are integrated to estimate an overall arousal index.
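To make the idea of integrating the two sensor streams concrete, here is a minimal sketch of how an arousal index could be computed. The sensor ranges, the min-max normalization and the equal weighting are illustrative assumptions, not the calibration actually used by the JDVS module.

```python
def arousal_index(heart_rate_bpm, face_temp_c,
                  hr_range=(50.0, 120.0), temp_range=(33.0, 37.0)):
    """Combine heart rate and facial temperature into one arousal score in [0, 1].

    The sensor ranges and the 50/50 weighting are illustrative assumptions.
    """
    def normalize(value, lo, hi):
        # Clamp to [0, 1] so out-of-range readings do not distort the index.
        return min(max((value - lo) / (hi - lo), 0.0), 1.0)

    hr_score = normalize(heart_rate_bpm, *hr_range)
    temp_score = normalize(face_temp_c, *temp_range)
    return 0.5 * hr_score + 0.5 * temp_score
```

A fused score like this gives downstream modules a single, bounded signal instead of two raw sensor streams with different units.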
Another component of the DMS is the visual distraction module: it uses machine learning models to detect the driver's body parts, such as the head and the hands, and other kinds of objects, like a smartphone. The module detects four different kinds of dangerous events: use of the central display, use of the smartphone, driving with only one hand, and the driver looking backwards.
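The mapping from per-frame detections to the four event labels can be sketched as follows. The detection keys and event names are hypothetical; the real module's output format is not described here.

```python
def classify_events(detections):
    """Map per-frame detections to the four dangerous-event labels.

    `detections` is a dict with illustrative keys; the actual module's
    detection format and thresholds are assumptions.
    """
    events = []
    if detections.get("gaze_on_central_display"):
        events.append("use_of_central_display")
    if detections.get("smartphone_in_hand"):
        events.append("smartphone_usage")
    if detections.get("hands_on_wheel", 2) < 2:
        events.append("one_hand_driving")
    if detections.get("head_turned_backwards"):
        events.append("looking_backwards")
    return events
```

Note that the four events are not mutually exclusive: a frame can trigger several of them at once, e.g. smartphone use with one hand on the wheel.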
Besides visual distraction, the driver's cognitive distraction is also monitored: an experimental module uses artificial intelligence methods to combine the driver's visual distraction, estimated from a frontal camera, with the vehicle dynamics and trajectory to estimate the driver's cognitive distraction level.
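As a rough illustration of combining driver and vehicle features into one score, the sketch below uses a logistic function over three hand-picked features. The feature names, weights and bias are purely illustrative; the project's module is a trained AI model whose details are not given here.

```python
import math

def cognitive_distraction_score(visual_distraction, steering_entropy,
                                lane_deviation,
                                weights=(1.5, 2.0, 2.0), bias=-2.5):
    """Logistic combination of driver and vehicle features into a score in (0, 1).

    Features, weights and bias are illustrative assumptions, not the
    project's trained model.
    """
    features = (visual_distraction, steering_entropy, lane_deviation)
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

The point of the combination is that erratic vehicle dynamics can reveal cognitive load even when the driver's gaze still looks attentive.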
Finally, the DMS has been equipped with a module dedicated to recognizing the driver's emotions and level of engagement by analysing his or her facial expressions and head orientation. All the collected data are then used by a dedicated module to estimate the overall Fitness to Drive Index.
The fitness to drive module is a comprehensive system that assesses the driver's ability to operate a vehicle safely. It aggregates and synchronizes the data coming from the previous modules. With a score of one indicating perfect driving ability and lower scores indicating a higher risk of accidents, the fitness to drive index can be used to determine whether the driver is fit to drive, i.e., whether the driver is able to drive safely.
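One simple way to aggregate the module outputs into such a score is a weighted linear combination, sketched below. The equal weights and the linear rule are assumptions for illustration only; the project's actual aggregation method is not described here.

```python
def fitness_to_drive(arousal, visual_distraction, cognitive_distraction,
                     engagement, weights=(0.25, 0.25, 0.25, 0.25)):
    """Aggregate module outputs (each in [0, 1]) into a fitness score in [0, 1].

    A score of 1 means perfect driving ability. Weights and the linear
    combination are illustrative assumptions.
    """
    w_a, w_v, w_c, w_e = weights
    # Low engagement contributes to risk, so it enters as (1 - engagement).
    risk = (w_a * arousal + w_v * visual_distraction +
            w_c * cognitive_distraction + w_e * (1.0 - engagement))
    return 1.0 - risk
```

Keeping every input and the output in [0, 1] makes the index easy to threshold and to compare across drivers and sessions.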
All the sensors of the DMS architecture are integrated into the driving simulator to monitor a driver's behaviour and reactions during simulation trials. The simulator is equipped with real car controls, including a steering wheel with force feedback, throttle, brake and clutch pedals, a manual gear shifter and turn-signal levers. Moreover, the architecture includes a projector screen that shows the scenario environment and a display placed behind the steering wheel, which shows the instrument cluster of the human machine interface.
Data coming from the fitness to drive module and from the driving simulator are exploited by a Decision Support System (DSS) that selects the proper action to take (e.g., taking over, braking, …) and adapts the Human Machine Interface (HMI) of the vehicle. The DSS is also able to perform long-term analysis of a driver's capabilities, highlighting the individual factors that affect the decision-making process. To reach this goal, an assessment of the available data and of the complementary data needs has been carried out.
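A minimal sketch of how such a DSS could pick an action from the fitness-to-drive index is shown below. The thresholds and the action set are illustrative assumptions, not the project's decision logic.

```python
def select_action(fitness_index, takeover_threshold=0.3, warn_threshold=0.6):
    """Pick a DSS action from the fitness-to-drive index in [0, 1].

    Thresholds and action names are illustrative assumptions.
    """
    if fitness_index < takeover_threshold:
        return "take_over"   # driver unfit: vehicle intervenes
    if fitness_index < warn_threshold:
        return "adapt_hmi"   # warn the driver through the HMI
    return "monitor"         # no intervention needed
```

In a long-term setting the thresholds themselves could be tuned per driver, which is where the individual factors identified by the DSS's analysis would come in.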
The Next Perception HMI implements a multimodal approach to nudge the driver into keeping a safe state while driving, that is, avoiding distractions and coping with high-intensity emotions, such as happiness, that might negatively affect driving safety. Vocal interaction, ambient lighting and a visual representation of the driver's state are used to achieve that purpose.
If you are interested in collaborating with us or if you would like information about our services, please contact us and we will be happy to help. Let’s get in touch and make something great happen.