Summary

The promise of a car that can drive on its own seems closer than ever. While Autonomous Vehicles (AVs) are currently being tested in carefully mapped test areas, they cannot yet mimic the way real people feel and behave when they drive. The challenge is to give cars the intelligence they need to react to the environment in a way similar to how people do. We propose that by analyzing long-term, location-based driver behavior and understanding how humans drive, the autonomous driving experience can be made more familiar and comfortable. To address this challenge, AVs need to place humans at center stage, optimizing the driving experience around driver and passenger behaviors, needs, and preferences, which may change dynamically based on different spatial and temporal factors.

The goal of this project is to combine human and machine advantages to humanize autonomy, pairing the beneficial nuances of human behavior, emotions, and trust with the technological and safety benefits of AVs. Behavior-guided AVs bring human factors such as emotions, behaviors, and trust into the autonomous loop, where AVs can enhance passenger experience, safety, and comfort. In this project, we are building models to predict drivers' behavioral and emotional changes in response to different environmental conditions and to automatically infer their preferences. To achieve this, we are conducting longitudinal naturalistic driving studies to identify how specific behaviors correlate with changes in environmental conditions. Our naturalistic study platform collects video from both outside and inside the car, physiological data from the driver, audio from both the music player and ambient noise, as well as driving behavior metrics from the OBD-II port, such as speed, acceleration, and tachometer profiles.
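As a concrete illustration of the driving-metric stream, the sketch below logs speed and engine RPM through the OBD-II port and derives acceleration by differencing consecutive speed samples. The python-obd library, the 1 Hz sampling rate, and the logging format are assumptions made for illustration; the actual study platform may use different hardware and software.

```python
# Minimal sketch of logging driving metrics from the OBD-II port.
# Assumptions: python-obd library, 1 Hz sampling, ~60 s of logging.
import time
import obd

connection = obd.OBD()  # auto-connects to the first available OBD-II adapter

SAMPLE_PERIOD_S = 1.0   # assumed sampling period
log = []
prev_speed_kph = None

for _ in range(60):
    speed = connection.query(obd.commands.SPEED)  # vehicle speed (km/h by default)
    rpm = connection.query(obd.commands.RPM)      # tachometer (engine RPM)

    if not speed.is_null() and not rpm.is_null():
        speed_kph = speed.value.magnitude
        # Acceleration is not a standard OBD-II PID; approximate it by
        # differencing consecutive speed samples (km/h -> m/s, then / dt).
        accel_mps2 = None
        if prev_speed_kph is not None:
            accel_mps2 = (speed_kph - prev_speed_kph) / 3.6 / SAMPLE_PERIOD_S
        log.append({
            "t": time.time(),
            "speed_kph": speed_kph,
            "rpm": rpm.value.magnitude,
            "accel_mps2": accel_mps2,
        })
        prev_speed_kph = speed_kph

    time.sleep(SAMPLE_PERIOD_S)
```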

How do we analyze our results?

We analyze the in-cabin videos to retrieve the driver's gaze, facial emotions, and body movements: gaze is estimated with OpenFace, facial emotions with Affectiva, and body movements with OpenPose. We analyze the outside videos using both manual annotation and automatic approaches such as object detection and semantic segmentation. From the wearable device, we retrieve heart rate as well as hand acceleration magnitudes. Using the conditions identified in the outside and inside videos (e.g., weather, road condition, number of passengers), we then analyze how the driver's emotions and heart rate variability vary across these environmental conditions.
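To illustrate how these streams could be combined, the sketch below joins per-frame gaze and emotion estimates with wearable heart-rate samples and the annotated environmental conditions, then summarizes variation per condition. The file names, column names, and pandas-based pipeline are illustrative assumptions rather than the project's actual schema.

```python
# Minimal sketch of aligning multimodal streams and summarizing them per
# environmental condition. All paths and column names are hypothetical.
import pandas as pd

gaze = pd.read_csv("openface_output.csv")        # e.g., timestamp, gaze_angle_x, gaze_angle_y, ...
emotions = pd.read_csv("affectiva_output.csv")   # e.g., timestamp, joy, anger, surprise, ...
hr = pd.read_csv("wearable_hr.csv")              # e.g., timestamp, heart_rate_bpm
conditions = pd.read_csv("annotations.csv")      # e.g., timestamp, weather, road_type, n_passengers

# Align all streams on the nearest timestamp (assumes a shared clock).
frames = pd.merge_asof(gaze.sort_values("timestamp"),
                       emotions.sort_values("timestamp"), on="timestamp")
frames = pd.merge_asof(frames, hr.sort_values("timestamp"), on="timestamp")
frames = pd.merge_asof(frames, conditions.sort_values("timestamp"), on="timestamp")

# Summarize emotion scores and heart rate per condition (here, by weather).
# The std of heart rate is only a simple proxy for variability; true HRV is
# computed from inter-beat (RR) intervals.
summary = frames.groupby("weather")[["joy", "anger", "heart_rate_bpm"]].agg(["mean", "std"])
print(summary)
```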

Publications

Tavakoli, A., Balali, V., & Heydarian, A. (2019). “A Multimodal Approach for Monitoring Driving Behavior and Emotions” (No. 19-05204).

Team Members

Arash Tavakoli

PhD Student

Xiang Guo

PhD Student