Friday, October 30, 2015

Curious about big data in Montreal?

Are you in Montreal and curious about big data? Here is your chance to attend a session on the topic at Concordia University on Tuesday, November 3 at 6:00 pm. The session is organized by BigDataUniversity, an IBM-led initiative that runs meetups across North America to raise awareness about, and provide training in, big data analytics.

BigDataUniversity runs MOOCs and, through its online Data Scientist Workbench, provides access to Python, R, and even Spark. You can also learn about Watson Analytics and see how you can work with the state of the art in computing.

Further details are available at:

Getting started with Data Science and Introduction to Watson Analytics

When: Tuesday, November 3, 6-9 PM

Where: H1269, 12th floor of the Hall Bldg 
(1455, blvd. De Maisonneuve ouest - Metro Guy-Concordia)

Wednesday, May 20, 2015

Are Canadian newspapers painting false pictures with data?

The Globe and Mail, a leading Canadian newspaper, sets the standard in diction and style, but it may need improvement in the ‘grammar of graphics’.

The Globe’s recent depiction of metropolitan economic growth in its Off the Charts series was way off the mark. The chart plotted current and forecasted GDP growth rates for select Canadian cities: red upward-sloping lines depicted cities with accelerating growth rates, and grey downward-sloping lines highlighted those with slowing growth.

There is, however, a small problem. The chart erroneously showed some slowing economies as growing, and vice versa. Furthermore, the trajectory of the sloping lines would mislead readers into assuming that cities with parallel lines enjoyed a similar increase in the growth rate, which, of course, is not true. The graphical faux pas could have been avoided had a bar chart been used.
Source: The Globe and Mail, Page B6, May 15.

Of course, the Globe and Mail is not alone in coming up with math that simply doesn’t add up. While covering the Scottish independence vote in September 2014, CNN reported referendum results that summed to 110%: 58% voting yes and another 52% voting no.
Source: Mail Online. September 19, 2014

The recent rise of data journalism has been accompanied by the emergence of data visualization, with editors increasingly reinforcing narrative with creative infographics. While major news outlets such as The Economist, The New York Times, and The Wall Street Journal have retained experts in data science and visualization, most newspapers have entrusted the task to graphics departments that rely on tools not specifically designed for data visualization. At times, the outcome is math- and logic-defying graphics that present a false picture.

Even when charts correctly depict data, the visualizations are at times too complex for the ordinary newsreader to grasp. Powerful data visualization tools, such as D3 (a JavaScript library), are often misused to create graphics too rich in detail to comprehend. Hierarchical edge bundling, for instance, is becoming increasingly popular in the news media, resulting in complex graphics that are visually impressive but conceptually confusing.

Edward Tufte and Leland Wilkinson have spent a lifetime advising data enthusiasts on how to present data-driven information. Wilkinson is the author of The Grammar of Graphics, which sets out the fundamentals for presenting data. Wilkinson’s writings inspired Hadley Wickham to develop ggplot2, a graphing engine for R, which is increasingly becoming the tool of choice for data scientists. 

Tufte inspired Dona M. Wong, who was the graphics director at the Wall Street Journal. Ms. Wong authored The Wall Street Journal Guide to Information Graphics. Her book is a quintessential guide for those who work with data and would like to present information as charts. She uses examples from the Journal to illustrate the dos and don’ts of presenting data as info-graphics.

Let us return to the forecasted metropolitan growth rates in Canada. I would prefer a horizontal bar chart instead. The bar chart offers several options for highlighting the main argument in the story. If I were interested in highlighting cities with the highest gains in growth since 2014, I would sort the cities accordingly, as illustrated in the graphic on the left (see below). If I were interested in highlighting cities with the highest forecasted growth rate, I would sort them that way instead, yielding the graphic on the right.

Dona Wong insists on simplicity in rendering. She concludes her book with a simple message for data visualization: simplify, simplify, simplify. The two bar charts present the same information as the Globe’s chart, but simplified, and they avoid misrepresenting the data. One can readily see that Halifax’s economy is forecasted to grow and Vancouver’s to shrink; the Globe’s rendering depicted exactly the opposite.
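
For readers who would like to reproduce this kind of chart in R, here is a minimal ggplot2 sketch. The growth figures below are illustrative placeholders, not the Globe’s actual numbers; the point is simply that sorting the cities by the variable of interest before plotting makes the story readable at a glance.

library(ggplot2)

# Illustrative placeholder data (not the Globe's figures)
growth <- data.frame(
  city = c("Halifax", "Vancouver", "Toronto", "Calgary"),
  rate_2014 = c(1.5, 3.0, 2.4, 3.8),
  rate_2015 = c(2.1, 2.6, 2.7, 1.2)
)
growth$gain <- growth$rate_2015 - growth$rate_2014

# Sort cities by the gain in growth since 2014 and draw a horizontal bar chart
growth$city <- reorder(growth$city, growth$gain)
ggplot(growth, aes(x = city, y = gain)) +
  geom_bar(stat = "identity") +
  coord_flip() +
  labs(x = NULL, y = "Change in GDP growth rate, 2014 to 2015 (percentage points)")

Sorting by the forecasted rate instead (reorder(growth$city, growth$rate_2015)) produces the second version of the chart.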

Thursday, April 23, 2015

UP Express in Toronto: A train less ridden

What does a billion dollars' worth of transit investment get in Toronto? A piddly 5,000 daily riders. To put things in perspective, dozens of bus routes in Toronto carry more passengers every day than the trips forecasted for the Union-Pearson rail link (UP Express).

The rail link will connect Canada's two busiest transport hubs: Union Station and Pearson Airport. Despite the express connection between the two busiest hubs, transport authorities expect only 5,000 daily riders on the UP Express. The King streetcar, in comparison, carries in excess of 65,000 daily riders.

The UP Express and the Sheppard subway extension are examples of transit money well wasted. A 2009 communiqué by Metrolinx estimated that the Georgetown expansion (including the UP Express) would cost over a billion dollars. The Globe and Mail reported that the Ontario government alone had invested $456 million in the UP Express. Instead of investing scarce transit dollars in projects likely to deliver the highest increase in transit ridership, billions are being spent on projects that will have a marginal impact on traffic congestion in the GTA.

With $29 billion in planned transport infrastructure investments, some of which will be publicised Thursday in the Ontario budget, the Province and the City need to get their priorities right. The very least they can do is stop investing in projects that do not generate sufficient transit ridership.

One may argue that 5,000 fewer automobile trips to and from the airport should help ease congestion in the GTA. However, with over 12 million daily trips in the GTA, 5,000 fewer trips are unlikely to make any meaningful difference to traffic congestion. At the same time, taxpayers should focus on the cost-benefit trade-offs of transit investments. Consider the cost-benefit efficiency of the existing TTC bus service to Pearson Airport (the 192 Airport Rocket), which carries over 4,000 daily passengers. A billion dollars later, the UP Express will move only about a thousand additional riders.

In North America, fewer than 10 airports are connected to local subway or regional rail transit. With the exception of Ronald Reagan Washington National Airport in Washington, DC, most airports accessible by rail report that only about 5% of trips to and from the airport are made by transit. The European experience, though, has been better: almost 35% of trips to and from Zurich airport were made on rail-based transit, and Munich airport reported 40% of trips by rail and bus.

Certain transit network attributes, which are missing for the UP Express, contribute to strong transit ridership to and from airports. For instance, the rail-based service to high-ridership airports does not terminate at the airport but continues onward to serve the communities along the corridor. In addition, the airport lines at successful airports are integrated with the rest of the rail-based transit system, rather than operating as standalone lines. The UP Express is a standalone rail line that connects to only one terminal at Pearson Airport. Its prohibitive fare makes the ride uneconomical for travellers in groups of two or more, who would find a cab ride cheaper and more convenient from most parts of suburban Toronto.

Two other key factors limit the ridership potential of the UP Express. First, the Billy Bishop Airport near downtown Toronto caters to the short-haul business travel market. It has been argued in the past that business travellers originating in downtown Toronto would rather take the train than a cab to Pearson Airport. Given the frequency of service and the choice of destinations served from Billy Bishop Airport, however, business travellers increasingly favour the downtown airport, which eats into the UP Express's potential market share.

In addition, peak operations at Pearson Airport coincide with the morning and afternoon peak commuting times in Toronto. This implies that one would have to commute to Union Station during the peak travel periods to ride the UP Express. The extra time and money required to travel to downtown Toronto from the inner suburbs alone will deter riders from using the Union-Pearson rail link.

The UP Express is yet another monument to public transit misadventures while the region continues to suffer from gridlock. Getting the transit priorities right is necessary before Ontario doles out $29 billion.

Thursday, April 9, 2015

Stata embraces Bayesian statistics

Stata 14 has just been released. The big new feature in version 14 is the introduction of Bayesian statistics. A wide variety of new models can now be estimated with Stata by combining 10 likelihood models, 18 prior distributions, different types of outcomes, and multiple-equation models. Stata has also made a 255-page reference manual illustrating Bayesian statistical analysis available for free.

Of course, R already offers numerous options for Bayesian inference. It will be interesting to hear from colleagues proficient in Bayesian statistics how Stata's newly added functionality compares with what has long been available in R.
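
As one illustration of what R already offers, here is a minimal sketch using the MCMCpack package to fit a Bayesian linear regression by MCMC. The model and data set are generic placeholders chosen only to show the workflow, not a comparison with Stata's new commands.

library(MCMCpack)   # provides MCMCregress() for Bayesian linear regression

# Placeholder example: Bayesian regression of fuel economy on weight and horsepower
data(mtcars)
fit <- MCMCregress(mpg ~ wt + hp, data = mtcars, burnin = 1000, mcmc = 10000)

# Posterior means, standard deviations, and quantiles of the coefficients
summary(fit)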

Given the hype around big data and the growing demand for data mining and advanced analytics, it would have been timely for Stata to also add data mining and machine learning algorithms. My two cents: data mining algorithms are in greater demand than Bayesian statistics, yet Stata users will likely have to wait a year or more for such capabilities. In the meantime, R offers several options for data mining and machine learning algorithms.
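
To give just one example of those options, a classification tree can be fit in a few lines with the rpart package. This is a minimal sketch on a stock data set, offered only to show how little code such an analysis requires in R.

library(rpart)   # recursive partitioning: classification and regression trees

# Minimal example: predict a car's origin from price, horsepower, and weight
data(Cars93, package = "MASS")
tree <- rpart(Origin ~ Price + Horsepower + Weight, data = Cars93, method = "class")

printcp(tree)                        # cross-validated error by tree size
head(predict(tree, type = "class"))  # predicted classes for the first few cars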

Monday, November 3, 2014

R: What to do when help doesn't arrive

R is great when it works; not so much when it doesn’t. This becomes a particular concern when a package’s options are not fully illustrated in the accompanying help documentation and the package authors don’t respond (in time).

I am not suggesting that package authors should respond to every email they receive. My request is that the documentation be complete enough that the authors’ help is not required on a day-to-day basis.

Recently, a colleague in the US and I became interested in the mlogit package. We wanted to use the weights option in the package. Like most other packages, mlogit does not illustrate how to use weights; it merely notes that the option is available. We assumed that the weights would work in a certain way (see page 26 of the hyperlinked document). However, when I estimated the model with weights, mlogit did not replicate the results from a popular econometrics textbook. Here are the details.

We wanted to see whether the weights option could be used in an alternative-specific logit formulation when the sampled data do not conform to the market shares observed in the underlying population. For instance, in a travel choice model, one may be tempted to over-sample train commuters and under-sample car commuters, because car commuters far outnumber train commuters for inter-city travel in most of Canada and the US. In such circumstances, we would weight the data set so that the estimated model reproduces the population market shares rather than the sample shares. (A sketch of the weighting we had in mind follows below.)
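
To make the idea concrete, here is a minimal sketch of how we assumed choice-based weights would enter mlogit: each decision-maker is weighted by the ratio of the population share to the sample share of the alternative he or she actually chose. The population shares below are hypothetical, and the way the weights argument is interpreted is our assumption, since the documentation does not spell it out; as discussed next, this approach did not reproduce the textbook results.

library(mlogit)

# Fishing data shipped with mlogit (wide format: one row per angler)
data("Fishing", package = "mlogit")

# Hypothetical population market shares for the four fishing modes
pop_share <- c(beach = 0.10, boat = 0.45, charter = 0.30, pier = 0.15)

# Observed sample shares of the chosen alternatives
smp_share <- prop.table(table(Fishing$mode))

# Choice-based weight for each angler: population share / sample share
# of the alternative that angler actually chose
ratio <- pop_share[names(smp_share)] / as.numeric(smp_share)
Fishing$cbweight <- ratio[as.character(Fishing$mode)]

# Reshape to long format; the individual-specific weight is carried along
Fish <- mlogit.data(Fishing, varying = 2:9, shape = "wide", choice = "mode")

# We assumed the weights are picked up from the data, as in lm()
fit <- mlogit(mode ~ price + catch | income, data = Fish, weights = cbweight)
summary(fit)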

The commercially available software NLogit/LimDep can do this with ease. I wanted to replicate the results for choice-based weights for the conditional logit model in Professor Bill Greene's book, Econometric Analysis. This is illustrated on page 853 of the 6th edition of the text, where Table 23.24 presents the parameter estimates for a conditional (McFadden) logit model in both un-weighted and choice-based weighted versions. I replicated those results in NLogit by simply adding the population market shares to the two-line syntax. The results generated by the mlogit package, however, bear no resemblance to the ones listed in Econometric Analysis.

It turns out that Stata is also limited in how it handles weights for its two estimation commands, asclogit and clogit. I know this because colleagues at Stata were quite diligent in responding to my requests. It’s not the same with mlogit, which may or may not be able to handle the weights; we will only know when the author responds.

I am recommending that individual authors not be left to bear sole responsibility for supporting R packages. An author could be ill, busy, or unavailable for a variety of reasons. This limitation could be dealt with proactively if the R community collectively generated help documentation with detailed worked-out examples of all available options (including weights), not just the few frequently used ones.

Improving documentation will be key to helping R branch out to everyday users of statistical analysis. The tech-savvy can iron out the kinks; they have the curiosity, patience, and time on their hands. The rest of the world is not that fortunate.

I propose that users of the packages, and not just the authors, collaborate to generate help documentation as vignettes and YouTube videos. This will do more to popularize R than another 6,000 new packages that few know how to use.

Sunday, December 8, 2013

Summarize statistics by Groups in R & R Commander

R is great at accomplishing complex tasks. Doing simple things with R, though, takes some effort. Consider the simple task of producing summary statistics for continuous variables over some factor variable. In Stata, I’d write a brief one-liner to get the mean of one or more variables using another variable as a factor. For instance, tabstat Horsepower RPM, by(Type) produces the following:


The doBy package in R offers similar functionality and more. Of particular interest for those who teach R-based statistics courses in undergraduate programs is the doBy plugin for R Commander. The plugin was developed by Jonathan Lee, and it is a great tool for teaching and for quick data analysis. To get the same output as the one listed above, I’d click on the doBy plugin to get the following dialogue box:


The dialogue box generates the following simple syntax:

summaryBy(Horsepower + RPM ~ Type, data = Cars93, FUN = c(mean))

You may first have to load the package and the data set:

library(doBy)
data(Cars93, package = "MASS")

And the results are presented below:


Jonathan has also created GUIs for orderBy, sampleBy, and splitBy within the same plug-in; quick console examples follow below. A must-use plug-in for data scientists.
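
For those working from the console rather than the GUI, here is a minimal sketch of those doBy functions on the same Cars93 data. The calls follow the package's documented interface (a formula plus the data frame, and frac for sampleBy), but treat them as a quick illustration rather than a tutorial.

library(doBy)
data(Cars93, package = "MASS")

# Order rows by horsepower, in descending order
ordered <- orderBy(~ -Horsepower, data = Cars93)
head(ordered[, c("Model", "Type", "Horsepower")])

# Split the data frame into a list of data frames, one per vehicle type
pieces <- splitBy(~ Type, data = Cars93)
names(pieces)

# Draw a 20% sample within each vehicle type
set.seed(1)
samp <- sampleBy(~ Type, frac = 0.2, data = Cars93)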

Comparing mnlogit and mlogit for discrete choice models

Earlier this weekend (Dec. 7, 2013), mnlogit was released on CRAN by Wang Zhiyu and Asad Hasan, who claim that mnlogit uses a “parallel C++ library to achieve fast computation of Hessian matrices”.

Here is a comparison of mnlogit with mlogit by Yves Croissant, whose package seems to be the inspiration for mnlogit.

I will estimate the same model using the same data set and will compare the two packages for execution speed, specification flexibility, and ease of use.

Data set

I use the Fish data set to estimate the same model with mnlogit and mlogit. The mnlogit documentation describes the data set as follows:

A data frame containing :

  • mode - The choice set: beach, pier, boat, and charter
  • price - price for a mode for an individual
  • catch - fish catch rate for a mode for an individual
  • income - monthly income of the individual decision-maker
  • chid - decision maker ID

The authors mention that they have sourced the data from R package mlogit by Yves Croissant, which lists the source as:

  • Herriges, J. A. and C. L. Kling (1999) “Nonlinear Income Effects in Random Utility Models”, Review of Economics and Statistics, 81, 62-72.

Estimation with mnlogit


## Warning: package 'mnlogit' was built under R version 3.0.2

## Package: mnlogit Version: 1.0 Multinomial Logit Choice Models. Scientific
## Computing Group, Sentrana Inc, 2013.

library(mnlogit)
data(Fish, package = "mnlogit")
fm <- formula(mode ~ 1 | income | price + catch)
summary(mnlogit(fm, Fish, "alt"))

## Call:
## mnlogit(formula = fm, data = Fish, choiceVar = "alt")
## Frequencies of alternatives in input data:
## beach boat charter pier
## 0.113 0.354 0.382 0.151
## Number of observations in training data = 1182
## Number of alternatives = 4
## Intercept turned: ON.
## Number of parameters in model = 14
## # individual specific variables = 2
## # choice specific coeff variables = 2
## # generic coeff variables = 0
## Maximum likelihood estimation using Newton-Raphson iterations.
## Number of iterations: 7
## Number of linesearch iterations: 7
## At termination:
## Gradient 2-norm = 8.19688645245397e-05
## Diff between last 2 loglik values = 4.15543581766542e-08
## Stopping reason: Succesive loglik difference < ftol (1e-06).
## Total estimation time (sec): 0.04
## Time for Hessian calculations (sec): 0.04 using 1 processors.
## Coefficients :
## Estimate Std.Error t-value Pr(>|t|)
## (Intercept):boat 8.64e-01 3.15e-01 2.74 0.00607 **
## (Intercept):charter 1.85e+00 3.10e-01 5.97 2.4e-09 ***
## (Intercept):pier 1.13e+00 3.05e-01 3.71 0.00021 ***
## income:boat -1.10e-04 6.02e-05 -1.84 0.06636 .
## income:charter -2.78e-04 6.03e-05 -4.61 4.0e-06 ***
## income:pier -1.28e-04 5.33e-05 -2.41 0.01605 *
## price:beach -3.80e-02 3.33e-03 -11.41 < 2e-16 ***
## price:boat -2.09e-02 2.23e-03 -9.33 < 2e-16 ***
## price:charter -1.60e-02 2.02e-03 -7.94 2.0e-15 ***
## price:pier -3.92e-02 3.26e-03 -12.02 < 2e-16 ***
## catch:beach 4.95e+00 8.20e-01 6.04 1.5e-09 ***
## catch:boat 2.47e+00 5.19e-01 4.76 1.9e-06 ***
## catch:charter 7.61e-01 1.52e-01 4.99 6.0e-07 ***
## catch:pier 4.88e+00 8.99e-01 5.43 5.5e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Log-Likelihood: -1160, df = 14
## AIC: 13.8875709217033

system.time((mnlogit(fm, Fish, "alt")))

##    user  system elapsed 
## 0.11 0.00 0.11

Estimation with mlogit

I estimate the same model using mlogit.


## Loading required package: Formula Loading required package: statmod
## Loading required package: lmtest Loading required package: zoo
## Attaching package: 'zoo'
## The following object is masked from 'package:base':
## as.Date, as.Date.numeric
## Loading required package: maxLik Loading required package: miscTools
## Loading required package: MASS

library(mlogit)
data2 <- mlogit.data(Fish, choice = "mode", shape = "long", id.var = "chid",
                     alt.levels = c("beach", "boat", "charter", "pier"))
summary(mod1 <- mlogit(mode ~ 1 | income | price + catch, data2, reflevel = "beach"))

## Call:
## mlogit(formula = mode ~ 1 | income | price + catch, data = data2,
## reflevel = "beach", method = "nr", print.level = 0)
## Frequencies of alternatives:
## beach boat charter pier
## 0.113 0.354 0.382 0.151
## nr method
## 7 iterations, 0h:0m:0s
## g'(-H)^-1g = 8.31E-08
## gradient close to zero
## Coefficients :
## Estimate Std. Error t-value Pr(>|t|)
## boat:(intercept) 8.64e-01 3.15e-01 2.74 0.00607 **
## charter:(intercept) 1.85e+00 3.10e-01 5.97 2.4e-09 ***
## pier:(intercept) 1.13e+00 3.05e-01 3.71 0.00021 ***
## boat:income -1.10e-04 6.02e-05 -1.84 0.06636 .
## charter:income -2.78e-04 6.03e-05 -4.61 4.0e-06 ***
## pier:income -1.28e-04 5.33e-05 -2.41 0.01605 *
## beach:price -3.80e-02 3.33e-03 -11.41 < 2e-16 ***
## boat:price -2.09e-02 2.23e-03 -9.33 < 2e-16 ***
## charter:price -1.60e-02 2.02e-03 -7.94 2.0e-15 ***
## pier:price -3.92e-02 3.26e-03 -12.02 < 2e-16 ***
## beach:catch 4.95e+00 8.20e-01 6.04 1.5e-09 ***
## boat:catch 2.47e+00 5.19e-01 4.76 1.9e-06 ***
## charter:catch 7.61e-01 1.52e-01 4.99 6.0e-07 ***
## pier:catch 4.88e+00 8.99e-01 5.43 5.5e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Log-Likelihood: -1160
## McFadden R^2: 0.225
## Likelihood ratio test : chisq = 675 (p.value = <2e-16)

system.time(mlogit(mode ~ 1 | income | price + catch, data2, reflevel = "beach"))

##    user  system elapsed 
## 0.27 0.02 0.28

Speed Tests

I conducted a simple test for execution times with the command system.time. The results are reported above after each model summary.


  • I obtain identical results for the models estimated with mnlogit and mlogit.

  • Estimation speeds appear faster for mnlogit.

  • Caveat: the same command (system.time), when run independently of the R Markdown environment, showed mlogit to be faster than mnlogit!

  • Additional comments: I restarted RStudio and estimated the two models again outside of R Markdown. mnlogit took 0.12 seconds versus 0.31 seconds for mlogit. (A more systematic timing sketch follows after this list.)

  • Verdict: mnlogit reports shorter execution times than mlogit.

  • Estimation speeds may also differ with larger and more complex data sets.

  • The model specification is simpler in mnlogit.

  • The commands for specifying the data set and the model seem easier in mnlogit.

  • mlogit syntax seems relatively complex, but offers more choices in model specification.
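
For a more systematic comparison than a single system.time call, the timings could be repeated with the microbenchmark package. This is a sketch under the assumption that the fm, Fish, and data2 objects created above are still in the workspace.

library(microbenchmark)
library(mnlogit)
library(mlogit)

# Repeat each estimation several times and compare the timing distributions,
# reusing the objects (fm, Fish, data2) created earlier in this post
timings <- microbenchmark(
  mnlogit = mnlogit(fm, Fish, "alt"),
  mlogit  = mlogit(mode ~ 1 | income | price + catch, data2, reflevel = "beach"),
  times = 10
)
print(timings)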

Final comment

I am delighted to see yet another package for discrete choice models. As the field evolves and more advanced problems are subjected to discrete choice models, the need for robust and computationally efficient packages will only grow. Both mlogit and mnlogit are indeed valuable to those interested in how humans make good and bad choices!