Supplemental Materials - A Review of Data Analytic Applications in Road Traffic Safety. Part 1: Descriptive and Predictive Modeling

In this review paper, we provide a comprehensive review of data analytic applications in road traffic safety research. This website serves as the supplementary material for the review paper, providing reproducible examples:

Mehdizadeh, A.; Cai, M.; Hu, Q.; Alamdar Yazdi, M.A.; Mohabbati-Kalejahi, N.; Vinel, A.; Rigdon, S.E.; Davis, K.C.; Megahed, F.M. A Review of Data Analytic Applications in Road Traffic Safety. Part 1: Descriptive and Predictive Modeling. Sensors 2020, 20, 1107.

This vignette includes examples on the following four topics:

  1. Bibliographic analysis: a bibliometric summary for transportation safety, bibliographic network matrices (journal names), a keyword co-occurrence network, and a conceptual structure map.
  2. Extracting online transportation safety data: crash-related data, traffic flow data, and weather data.
  3. A clustering example.
  4. Statistical methods: logistic regression and Poisson regression.

To maximize the readability of this website, all R code is hidden by default, but readers can view any chunk by clicking the Code button. The data folder is not uploaded to the GitHub repository because the files exceed GitHub's size limit, but all the data are open access, and interested readers can download the files from the links given in each section.

1 Bibliographic analysis


To perform a quick bibliometric analysis on Clarivate Analytics Web of Science (WoS), we searched for transportation safety publications in the WoS Core Collection using the following combination of words, without limiting the document type, years, or language:

("hazmat transportation" OR "real-time crash prediction" OR ("vehicle routing" AND safety))

This resulted in a downloadable plain-text file (which we named 'savedrecs.txt' under the data folder), containing the full records with cited references for 992 results (search conducted on 7/30/2018 at 11:08 am ET). We then used the R package bibliometrix to conduct a bibliometric analysis on this topic.

1.1 Keyword co-occurrences

pacman::p_load(bibliometrix, tidyverse, data.table, rvest, DT)

# read the WoS plain-text export and convert it to a bibliographic data frame
df.trans.safety <- readFiles("data/savedrecs.txt")
M <- convert2df(df.trans.safety)

# build the keyword co-occurrence matrix
NetMatrix <- biblioNetwork(
  M, analysis = "co-occurrences", network = "keywords", sep = ";")

net <- networkPlot(
  NetMatrix, normalize = "salton", weighted = NULL, n = 60,
  Title = "Keyword Co-Occurrences", type = "auto",
  size = 15, size.cex = TRUE, remove.multiple = TRUE, labelsize = 0.75,
  label.n = 60, label.cex = FALSE, cluster = "optimal", edges.min = 5,
  label.color = TRUE, halo = TRUE)

# summary statistics for the keyword network
netstat <- networkStat(NetMatrix)
out <- capture.output(summary(netstat, k = 60))
A keyword co-occurrence network of the literature, depicting the 60 most used keywords.

1.2 A conceptual structure map

We create a conceptual structure map from Keywords Plus using the multiple correspondence analysis (MCA) method, keeping terms that appear at least 20 times.

# MCA on Keywords Plus terms (field = "ID_TM") appearing at least 20 times
CS <- conceptualStructure(
  M, field = "ID_TM", method = "MCA", minDegree = 20,
  k.max = 5, labelsize = 15, documents = 856)
A data-driven conceptual structure map based on Keywords Plus

2 Extracting online transportation safety data


This section provides sources for online open-access transportation data, including both historical and real-time crash-related, traffic flow, and weather data. We also provide R code to read different formats of data and convert them to comma-separated value (.csv) files.

2.2 Traffic flow data

Historical data (yearly)

The FHWA provides Annual Average Daily Traffic (AADT) data from 2011 to 2017. As an illustration, the following code chunk displays the first few observations of the 2017 AADT data for Missouri.

# Using Missouri as an example
fhwai = foreign::read.dbf("data/missouri2017/Missouri2017.dbf")

datatable(
  head(fhwai),
  caption = 'Historical traffic data in Missouri, 2017',
  class = 'cell-border stripe', 
  extensions = c('Buttons', 'ColReorder'),
  options = list(
    dom = 'Bfrtip', colReorder = TRUE, scrollX = TRUE,
    buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
  ))

The downloaded “shape files” can be converted to different data formats (e.g., .csv) using the following R code.

data.table::fwrite(fhwai, "data/missouri2017/fhwai.csv")

Real-time data (<= 5 minutes)

There are several sources of real-time traffic data. Some states in the USA are equipped with loop detectors and video cameras, and their Departments of Transportation (DOTs) can provide these data.

Furthermore, the HERE website provides near real-time traffic data, with a limit of 250,000 free API calls per month. The HERE Traffic API provides traffic flow and incident information, and it also allows users to request traffic map tiles. The HERE Traffic API provides four types of traffic data:

  • Traffic Incident Data: the type and location of each traffic incident, status, start and end time, and other relevant data
  • Traffic Map Tile Overlays (Traffic Tiles): pre-rendered map tile overlays with traffic information
  • Traffic Flow Data: real-time traffic flow data, including speed, congestion, geometry of the road segments
  • Traffic Flow Availability: traffic flow information, excluding incidents in an area

HERE does not provide an R package or interface, and querying the HERE API from R requires highly customized code; a minimal sketch of such a query is given below.
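As a rough illustration, the following sketch queries the HERE Traffic Flow endpoint with httr. The v6.2 endpoint URL and the bbox parameter format are assumptions based on HERE's REST documentation and may differ across API versions; substitute your own API key.

# A minimal, hypothetical sketch of a HERE traffic flow query; the endpoint
# and parameter names are assumptions, so consult the current HERE documentation
pacman::p_load(httr, jsonlite)

here_resp <- httr::GET(
  "https://traffic.ls.hereapi.com/traffic/6.2/flow.json",
  query = list(
    apiKey = Sys.getenv("HERE_API_KEY"),  # your own HERE API key
    bbox = "33.80,-84.45;33.70,-84.35"    # assumed format: lat1,lon1;lat2,lon2 (Atlanta area)
  ))
flow <- jsonlite::fromJSON(
  httr::content(here_resp, as = "text", encoding = "UTF-8"))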

2.3 Weather data

2.3.1 Real-time (minutes)

The DarkSky API

In this part, we show how to get both historical and real-time weather data using the DarkSky API, which can be used in both Python and R. Before using the DarkSky API to get weather data, you need to register for an API key on its official website. The first 1,000 API requests you make each day are free, but each API request over the 1,000 daily limit costs $0.0001, which means a million extra API requests will cost you 100 USD.

To get weather data from the DarkSky API, you need to provide 1) latitude, 2) longitude, and 3) date and time. You can then pass these three parameters to the get_forecast_for() function in the darksky package in R. For each observation (a combination of latitude, longitude, date, and time), the DarkSky API returns a list of three data frames:

  1. hourly weather: 24 hourly observations of 15 weather variables for that day.
  2. daily weather: one observation of 34 weather variables for that day.
  3. current weather: one observation of 15 weather variables at the specified time point.

The variables include: apparent (feels-like) temperature, atmospheric pressure, dew point, humidity, liquid precipitation rate, moon phase, nearest storm distance, nearest storm direction, ozone, precipitation type, snowfall, sunrise/sunset, temperature, text summaries, UV index, wind gust, wind speed, and wind direction.

Sample data

source("private/DarkSkyAPIkey.R")
Sys.setenv(DARKSKY_API_KEY = myDarkSkyAPIkey) # you need to use your own "myDarkSkyAPIkey"

dat = structure(list(
  latitude = c(41.3473127, 41.8189037, 32.8258477, 40.6776808, 40.2366043), 
  longitude = c(-74.2850908, -73.0835104, -97.0306677, -75.1450753, -76.9367494), 
  time = structure(c(1453101738, 1437508088, 1436195038, 1435243088, 1454270680), 
  class = c("POSIXct", "POSIXt"), tzone = "UTC")), 
  row.names = c(NA, -5L), class = "data.frame"
)

weather_dat <- pmap(
   list(dat$latitude, dat$longitude, dat$time),
   get_forecast_for)

datatable(
  dat,
  caption = 'Sample data to demonstrate the DarkSky API',
  class = 'cell-border stripe', 
  extensions = c('Buttons', 'ColReorder'),
  options = list(
    dom = 'Bfrtip', colReorder = TRUE, scrollX = TRUE,
    buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
  )) %>% 
  formatStyle(0, target = 'row', lineHeight = '80%')

currently

datatable(
  head(weather_dat[[1]]$currently),
  caption = 'Current weather provided by the DarkSky API',
  class = 'cell-border stripe', 
  extensions = c('Buttons', 'ColReorder'),
  options = list(
    dom = 'Bfrtip', colReorder = TRUE, scrollX = TRUE,
    buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
  )) %>% 
  formatStyle(0, target = 'row', lineHeight = '80%')

hourly

datatable(
  head(weather_dat[[1]]$hourly),
  caption = 'Hourly weather provided by the DarkSky API',
  class = 'cell-border stripe', 
  extensions = c('Buttons', 'ColReorder'),
  options = list(
    dom = 'Bfrtip', colReorder = TRUE, scrollX = TRUE,
    buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
  )) %>% 
  formatStyle(0, target = 'row', lineHeight = '80%')

daily

datatable(
  head(weather_dat[[1]]$daily),
  caption = 'Daily weather provided by the DarkSky API',
  class = 'cell-border stripe', 
  extensions = c('Buttons', 'ColReorder'),
  options = list(
    dom = 'Bfrtip', colReorder = TRUE, scrollX = TRUE,
    buttons = c('copy', 'csv', 'excel', 'pdf', 'print')
  )) %>% 
  formatStyle(0, target = 'row', lineHeight = '80%')

2.3.2 Historical (daily)

NOAA

NOAA's National Centers for Environmental Information provide historical daily weather summaries, such as the Global Historical Climatology Network Daily (GHCND) dataset, which can be queried through NOAA's web services with a free API token.
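As a minimal sketch (assuming the rnoaa package and a free NCDC token stored in the NOAA_KEY environment variable; the station ID below is only an illustrative example), daily GHCND records can be pulled as follows:

# A hypothetical sketch using rnoaa to query GHCND daily summaries; the
# station ID (Atlanta Hartsfield airport) and token handling are assumptions
pacman::p_load(rnoaa)

noaa_daily <- rnoaa::ncdc(
  datasetid = "GHCND",
  stationid = "GHCND:USW00013874",
  startdate = "2015-01-01", enddate = "2015-01-31",
  token = Sys.getenv("NOAA_KEY"))
head(noaa_daily$data)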

3 A clustering example


The following code attempts to replicate the visual clustering approach of Van Wijk and Van Selow (1999):

Van Wijk, Jarke J., and Edward R. Van Selow. 1999. “Cluster and Calendar Based Visualization of Time Series Data.” In Proceedings 1999 IEEE Symposium on Information Visualization (InfoVis’ 99), 4–9. IEEE.

A brief example of applying EDA methods to traffic data is provided here. The goal of this example is to illustrate the efficiency of the aforementioned tools in the transportation context. There is no predetermined way to utilize these methods; the efficiency of each method depends highly on the nature of the problem. Hence, the challenge is to choose the tool that best fits the problem.

3.1 Collecting Data

Hourly vehicle count data are used in this example; they give the number of vehicles that passed along a particular segment of a road in one hour. The data were extracted from the Georgia Department of Transportation (GDoT) (Georgia Department of Transportation, 2015) for 2015 from station 121-5505, which is located in Atlanta. GDoT provides the data in separate sheets for each month. After extracting and cleaning the data, they were combined into one sheet with 365 rows (days) and 24 columns (hours). The data can be downloaded from GDoT.

3.2 Clustering

It is almost impossible to understand raw data, or to discover interesting patterns in it, by just looking at 8,760 (365 × 24) data cells. Hence, the k-means clustering method is utilized here to present the data in a more understandable format. k-means clustering is a common technique to explore data and discover patterns by grouping similar observations into a predefined number (k) of clusters, chosen so as to minimize the within-cluster sum of squares (WCSS). To find the optimal number of clusters, we used the method suggested by Pham, Dimov, and Nguyen (2005). According to the following graph, two is the best number of clusters for these data.

trafficflow.df = "data/georgia-TFdata-station-121-5505-Yr2015.csv" %>% 
  readr::read_csv() %>% 
  mutate(Date = lubridate::dmy(paste0(Date, '-2015')))

# f(K) criterion of Pham, Dimov, and Nguyen (2005), as implemented in ClusterR
opt = Optimal_Clusters_KMeans(
  as.data.frame(trafficflow.df[,4:27]), max_clusters = 10, 
  plot_clusters = T, criterion = 'distortion_fK', fK_threshold = 0.85,
  initializer = 'optimal_init', tol_optimal_init = 0.2,
  max_iters = 10000)

num_clusters <- which.min(opt) # based on the results, we should use k = 2 clusters
# fit k-means and assign each day (row) to a cluster
km = KMeans_arma(
  as.data.frame(trafficflow.df[,4:27]), clusters = num_clusters, 
  n_iter = 10000, seed_mode = "random_subset", verbose = F, CENTROIDS = NULL)
pr = predict_KMeans(data.frame(trafficflow.df[,4:27]), km)
trafficflow.df$cluster.num <- as.vector(pr) %>% as.factor()
table(trafficflow.df$cluster.num)
## 
##   1   2 
## 119 246

3.3 Visualization

Now, k-means clustering can be applied. The output of this step is a column whose value is either one or two, indicating whether each row of the data (a day) belongs to cluster one or cluster two. The data are now divided into two groups. However, we still need to present the data in a visual format to validate and guide the clustering process. Since our data contain temporal information, we used the cluster calendar view visualization technique introduced by Van Wijk and Van Selow (1999). In this technique, a calendar represents the temporal information of the data, and color coding distinguishes the clusters. The following graph shows a cluster calendar view for our data. The clustering has clearly found meaningful patterns in the vehicle count data: weekends and weekdays have different traffic patterns. Moreover, it has captured some of the holidays. For example, the 4th of July (Independence Day) is colored light blue, meaning that this day has a traffic pattern similar to weekends. In addition, the clustering method has identified other holidays such as Martin Luther King Day, Memorial Day, Labor Day, Thanksgiving Day, and Christmas Day.

Sys.setlocale(locale = "English") # ensure English weekday/month labels (Windows locale syntax)
## [1] "LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;LC_MONETARY=English_United States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252"
col.brewer.pal <- brewer.pal(11, "Paired") # qualitative palette from RColorBrewer
p2 <- ggcal(trafficflow.df$Date, trafficflow.df$cluster.num) + # calendar heatmap via the ggcal package
  theme(legend.position = "top") +
  scale_fill_manual(values = c("1" = col.brewer.pal[1], "2" = col.brewer.pal[2]))
p2

Furthermore, a line chart (following graph) is used to show the average hourly traffic for the two clusters. The results show that each cluster has different peaks and valleys. On weekdays, 7 AM and 4 PM have the greatest numbers of vehicles, which can be explained by official working hours. On weekends, on the other hand, the traffic peak is around 1 PM, which may reflect people going out for lunch.

# average traffic flow by hour within each cluster
summary.df <- group_by(trafficflow.df, cluster.num)
summary.df <- summarise_all(summary.df, mean)

# reshape to long format for plotting: one row per (cluster, hour) pair
plot.df <- subset(summary.df, select = -c(2:4))
plot.df <- reshape2::melt(plot.df, value.name = "Traffic.Flow",
                          variable.name = "Hour", id.vars = "cluster.num")
plot.df$cluster.num <- as.factor(plot.df$cluster.num)

p1 <- plot.df %>% 
  ggplot(aes(x = Hour, y = Traffic.Flow, group = cluster.num, color = cluster.num)) + 
  geom_line(size = 2) + theme_bw() + 
  theme(legend.position = "top", axis.text.x = element_text(angle = 90, hjust = 1)) +
  scale_color_brewer(palette = "Paired") # palette must be passed by name; the first argument is the legend title
p1

To sum up, the k-means clustering method was very effective here. Using raw data as input, we discovered patterns (weekday and weekend traffic patterns), and with the help of the visualization technique we obtained considerable information about the data.

4 Statistical models


Logistic regression and Poisson regression are the two most commonly used statistical models in crash risk prediction studies. In this section, we provide introductory statistical theory for the two models, along with R code examples using simulated data.

4.1 Logistic regression

Theory

The most common model for crash risk prediction studies is logistic regression, where the response is \(Y_i=1\) if a crash occurred in a given segment/time period, and \(Y_i=0\) if no crash occurred. In logistic regression, we assume that the logit of the probability of a traffic crash is a linear combination of predictor variables:

\[ Y_i \sim \text{Bernoulli}(p_i), \quad \text{logit}(p_i) = \log \left( \frac{p_i}{1-p_i}\right) = \textbf{x}_i^{\prime} \boldsymbol{\beta} \]

Here

\[ \textbf{x}_i^{\prime} \boldsymbol{\beta} = \beta_0 + \sum_{j=1}^{p} x_{ij} \beta_j \]

where the \(x_{ij}\) are covariates. A simple logistic regression can be fitted using the following R code.

R code

# simulate data
set.seed(1853)
n = 10000
fatigue = rnorm(n, 3, 1)
precipitation = rbeta(n, 1, 5)
traffic = rgamma(n, 5, 1)
p = gtools::inv.logit(-5 + 0.5*fatigue + 0.2*traffic + 0.5*precipitation)
Y = rbernoulli(n, p)

logit_dat = data.frame(Y, fatigue, traffic, precipitation)

# logistic regression
logit_fit <- glm(
  Y ~ fatigue + traffic + precipitation,
  family = "binomial", data = logit_dat)
summary(logit_fit)
## 
## Call:
## glm(formula = Y ~ fatigue + traffic + precipitation, family = "binomial", 
##     data = logit_dat)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.3396  -0.4780  -0.3840  -0.3019   2.9676  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   -4.95964    0.15678 -31.634  < 2e-16 ***
## fatigue        0.46204    0.03533  13.078  < 2e-16 ***
## traffic        0.20849    0.01372  15.199  < 2e-16 ***
## precipitation  0.67648    0.23457   2.884  0.00393 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 6319.6  on 9999  degrees of freedom
## Residual deviance: 5917.5  on 9996  degrees of freedom
## AIC: 5925.5
## 
## Number of Fisher Scoring iterations: 5

For a detailed explanation of how to interpret the results of a logistic regression, readers can refer to the example provided by UCLA.
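As a quick illustration, the estimated coefficients can be exponentiated to obtain odds ratios:

# Exponentiate the coefficients: each one-unit increase in a covariate
# multiplies the odds of a crash by the corresponding factor
exp(coef(logit_fit))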

4.2 Poisson regression

Theory

Another widely used model for crash risk prediction studies is Poisson regression, where the response \(Y_i\) is the number of crashes.

\[ Y_i \sim \text{Poisson}(T_i \lambda_i), \quad \log(\lambda_i) = \textbf{x}_i^{\prime} \boldsymbol{\beta} \]

Here \(T_i\) is the length/duration of trip \(i\). It is an exposure variable that accounts for the opportunity for crashes to occur: the expected number of crashes is proportional to \(T_i\). In R's glm(), the exposure enters the model as an offset of \(\log(T_i)\) on the linear predictor scale.

R code

# simulate data
set.seed(123)
n = 500
fatigue = rnorm(n, 3, 1)
precipitation = rbeta(n, 1, 5)
traffic = rgamma(n, 5, 1)
trip = rnorm(n, 6, 1)
explambda = exp(-5 + 0.5*fatigue + 0.2*traffic + 0.5*precipitation)
Y = rpois(n, lambda = explambda*trip)

pois_dat = data.frame(Y, trip, fatigue, traffic, precipitation)

# Poisson regression: the exposure enters as an offset on the log scale,
# matching log E[Y_i] = log(T_i) + x_i' * beta
poisson_fit <- glm(
  Y ~ fatigue + traffic + precipitation, 
  offset = log(trip), family = "poisson", data = pois_dat)
summary(poisson_fit)

The estimated coefficients should be close to the true values used in the simulation (an intercept of -5 and slopes of 0.5, 0.2, and 0.5 for fatigue, traffic, and precipitation, respectively).

For a detailed explanation of how to interpret the results of a Poisson regression, readers can refer to the example provided by UCLA.
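Analogously to the logistic case, exponentiating the Poisson coefficients yields rate ratios:

# Exponentiate the coefficients: each one-unit increase in a covariate
# multiplies the expected crash rate by the corresponding factor
exp(coef(poisson_fit))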

5 Optimization

Since the optimization code is mostly Python-based, we do not provide code for the optimization section here. Readers interested in this part can contact our optimization collaborators or refer to the supplementary materials for Part 2 of our review. Part 2 of the review (prescriptive modeling) is:

Hu, Q.; Cai, M.; Mohabbati-Kalejahi, N.; Mehdizadeh, A.; Alamdar Yazdi, M.A.; Vinel, A.; Rigdon, S.E.; Davis, K.C.; Megahed, F.M. A Review of Data Analytic Applications in Road Traffic Safety. Part 2: Prescriptive Modeling. Sensors 2020, 20, 1096.

Acknowledgement

This work was supported in part by the National Science Foundation (CMMI-1635927 and CMMI-1634992), the Ohio Supercomputer Center (PMIU0138 and PMIU0162), the American Society of Safety Professionals (ASSP) Foundation, the University of Cincinnati Education and Research Center Pilot Research Project Training Program, and the Transportation Informatics Tier I University Transportation Center (TransInfo). We also thank the DarkSky API for providing us five million free calls to their weather database.

References

Pham, Duc Truong, Stefan S Dimov, and Chi D Nguyen. 2005. “Selection of k in k-Means Clustering.” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 219 (1): 103–19.
Van Wijk, Jarke J, and Edward R Van Selow. 1999. “Cluster and Calendar Based Visualization of Time Series Data.” In Proceedings 1999 IEEE Symposium on Information Visualization (InfoVis’ 99), 4–9. IEEE.

  1. Department of Industrial and Systems Engineering, Auburn University. azm0127@auburn.edu↩︎

  2. Department of Epidemiology, School of Public Health, Sun Yat-sen University. miao.cai@outlook.com↩︎

  3. Department of Industrial and Systems Engineering, Auburn University. qzh0011@auburn.edu↩︎

  4. Carey Business School, Johns Hopkins University. mza0052@auburn.edu↩︎

  5. Department of Industrial and Systems Engineering, Auburn University. nzm0030@auburn.edu↩︎

  6. Department of Industrial and Systems Engineering, Auburn University. alexander.vinel@auburn.edu↩︎

  7. Department of Epidemiology and Biostatistics, Saint Louis University. steve.rigdon@slu.edu↩︎

  8. Department of Computer Science and Software Engineering, Miami University. davisk4@miamioh.edu↩︎

  9. Farmer School of Business, Miami University. fmegahed@miamioh.edu↩︎