Summary

The promise of a car that can drive itself seems closer than ever. While autonomous vehicles (AVs) are currently being tested in carefully mapped test areas, they cannot yet mimic the way real people drive and feel behind the wheel. The challenge is to give cars the intelligence they need to react to the environment similarly to how people do. We propose that by analyzing long-term, location-based driver behavior and understanding how humans drive, the autonomous driving experience can be made more familiar and comfortable. To get there, AVs need to put humans at center stage, optimizing the driving experience around driver and passenger behaviors, needs, and preferences, which may change dynamically based on different spatial and temporal factors. The goal of this project is to leverage both human and machine advantages to humanize autonomy by combining the beneficial nuances of human behavior, emotion, and trust with the technological and safety benefits of AVs. Behavior-guided AVs bring human factors such as emotions, behaviors, and trust into the autonomous loop, where AVs can enhance passenger experience, safety, and comfort.

In this project, we are building models to predict driver behavior and emotional changes in response to different environmental conditions, and to automatically infer driver preferences. To achieve this, we are conducting longitudinal naturalistic driving studies to identify correlations between specific behaviors and changes in environmental conditions. Our naturalistic study platform collects videos from both outside and inside the car, physiological data from the driver, audio data from both the music player and ambient noise, as well as driving behavior metrics such as speed, acceleration, and tachometer profiles collected through the OBD-II port.
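To illustrate the kind of vehicle telemetry described above, the sketch below polls speed and engine RPM over a standard OBD-II adapter using the open-source python-obd library. The 1 Hz sampling rate, CSV log format, and choice of python-obd are assumptions for illustration, not the exact implementation used in our platform.

```python
# Minimal sketch: logging speed and RPM from an OBD-II adapter with python-obd.
# The 1 Hz polling rate and CSV output are illustrative assumptions only.
import csv
import time

import obd  # pip install obd

connection = obd.OBD()  # auto-detects a connected OBD-II adapter

with open("obd_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "speed_kph", "rpm"])
    while connection.is_connected():
        speed = connection.query(obd.commands.SPEED)  # vehicle speed
        rpm = connection.query(obd.commands.RPM)      # engine tachometer
        writer.writerow([
            time.time(),
            None if speed.is_null() else speed.value.magnitude,
            None if rpm.is_null() else rpm.value.magnitude,
        ])
        time.sleep(1.0)  # ~1 Hz polling
```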

HARMONY 1.0 is the current version of our driving sensing platform. The platform collects data from three major sources: (1) a camera, (2) a smartwatch, and (3) a music API. Our platform uses a BlackVue DR750S-2CH, a commercially available dual dash camera that records the inside and outside of the cabin simultaneously. The DR750S-2CH supports up to 256 GB of SD card memory, which makes it suitable for long-term data collection. The camera has a built-in GPS receiver that acquires a fix as the car turns on, syncing the device clock with global GPS time so the camera always has the current time. The camera does not have an LCD, which reduces both the chance of driver distraction and the participant's feeling of being monitored, and it offers the option of disabling audio recording. In addition, BlackVue records the speed of the car from the output of the built-in GPS and overlays it on the recorded video. The camera records video at 30 fps and Full HD resolution in 3-minute segments; each 3-minute segment of driving is saved as a joint video of the inside and outside environments. The camera also has cloud capability, which can be used to record GPS, speed, and video to a cloud server; we do not currently use this option but will explore it in the future for additional data storage. Lastly, the camera turns on and off automatically when the car's engine starts and stops.
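Because the camera writes fixed 3-minute clips, downstream processing needs to stitch consecutive clips back into trips. The sketch below groups clips into trips by their start timestamps; the BlackVue-style filename prefix (YYYYMMDD_HHMMSS) and the 4-minute gap threshold are assumptions for illustration, not a description of our actual pipeline.

```python
# Minimal sketch: grouping 3-minute dashcam clips into trips by start time.
# Assumes filenames begin with a YYYYMMDD_HHMMSS timestamp (e.g. 20190610_153000_*.mp4);
# adjust the pattern and gap threshold to the actual recordings.
from datetime import datetime, timedelta
from pathlib import Path

SEGMENT_GAP = timedelta(minutes=4)  # clips further apart than this start a new trip

def clip_start(path: Path) -> datetime:
    """Parse the start time encoded in the first 15 characters of the filename."""
    return datetime.strptime(path.name[:15], "%Y%m%d_%H%M%S")

def group_into_trips(video_dir: str) -> list[list[Path]]:
    clips = sorted(Path(video_dir).glob("*.mp4"), key=clip_start)
    trips: list[list[Path]] = []
    for clip in clips:
        if trips and clip_start(clip) - clip_start(trips[-1][-1]) <= SEGMENT_GAP:
            trips[-1].append(clip)   # continues the current trip
        else:
            trips.append([clip])     # gap too large: start a new trip
    return trips

if __name__ == "__main__":
    for i, trip in enumerate(group_into_trips("./recordings"), start=1):
        print(f"Trip {i}: {len(trip)} clips, starting {clip_start(trip[0])}")
```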

Our platform uses an Android smartwatch equipped with "SWEAR," an in-house app for long-term data collection from smartwatches (A). The app was designed by Professor Boukhechba, who is also a Link Lab member. SWEAR streamlines data collection by allowing each sensor's sampling frequency to be set to a desired value. SWEAR records heart rate, hand acceleration, audio amplitude (noise level), light intensity, location, and gyroscope data. Participants are required to start and stop the data collection for every driving session (B). The smartwatch saves every segment of driving data locally and shows the participant's ID and the number of saved files on the status tab (C). Every participant is required to sync the watch with their phone, and the watches are all registered on the university wireless network. Using both the participant's smartphone and Wi-Fi, the watch clock is synced so that it always reflects the current time. Every two weeks, the participant is required to transfer their data to our system by clicking the upload icon on the settings tab (D). The data are then recorded in our system under the participant's ID.
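Because the watch sensors can run at different sampling frequencies, analysis typically begins by resampling the streams onto a common timeline. The sketch below aligns a heart-rate stream with an accelerometer stream using pandas; the column names, 1-second grid, and CSV inputs are illustrative assumptions rather than the actual SWEAR export format.

```python
# Minimal sketch: aligning two watch sensor streams onto a common 1-second grid.
# Column names and file layout are assumptions; adapt them to the real export format.
import pandas as pd

def load_stream(path: str, value_cols: list[str]) -> pd.DataFrame:
    """Load a sensor CSV with a 'timestamp' column (Unix seconds) as a time-indexed frame."""
    df = pd.read_csv(path)
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
    return df.set_index("timestamp")[value_cols]

# Heart rate (~1 Hz) and accelerometer (~50 Hz) recorded by the watch.
hr = load_stream("heart_rate.csv", ["bpm"])
acc = load_stream("accelerometer.csv", ["ax", "ay", "az"])

# Resample both streams to 1-second bins, then join on the shared index.
hr_1s = hr.resample("1s").mean()
acc_1s = acc.resample("1s").mean()
aligned = hr_1s.join(acc_1s, how="inner")

print(aligned.head())
```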

Where have our participants driven so far?

The data have been collected mostly from the eastern and northeastern regions of the United States, including Virginia, Pennsylvania, Delaware, West Virginia, Indiana, Illinois, Ohio, Vermont, New Hampshire, Maine, and New York. Data collection began in June 2019 and is ongoing. The most recent map of the collected data is shown below and is updated periodically as data collection continues.

Presentations

Tavakoli, A., Heydarian, A., & Balali, V. (2019). How Do Environmental Factors Affect Driver's Gaze Direction and Head Movements? A Long-Term Naturalistic Driving Study. Transportation Research Board Annual Meeting 2020.

Publications

Tavakoli, A., Boukhechba, M., & Heydarian, A. (2020, July). Personalized Driver State Profiles: A Naturalistic Data-Driven Study. In International Conference on Applied Human Factors and Ergonomics (pp. 32-39). Springer, Cham.

Tavakoli, A., Balali, V., & Heydarian, A. (2019). A Multimodal Approach for Monitoring Driving Behavior and Emotions (No. 19-05204).

Reason-Aware Annotation for Contextualizing Driving Scenarios

ORIEL: an Open-Source Multi-modal Driver’s Activity Recognition Classifier in the Wild

Team Members

Arsalan Heydarian

Assistant Professor

Principal Investigator

Arash Tavakoli

PhD Student

Graduate Research Assistant

Xiang Guo

PhD Student

Graduate Research Assistant