Do we really need another B-school ranking?

June 2, 2011: 10:29 AM ET

The dean of UT Dallas' business school explains why he entered the already crowded field of business school rankings and what separates his approach from the rest.

Interview by Neelima Mahajan-Bansal, contributor

Hasan Pirkul, dean of the School of Management at University of Texas at Dallas

(poetsandquants.com) -- A few years ago, Hasan Pirkul came across an article ranking the operations management departments of different universities. Pirkul, the dean of the School of Management at University of Texas at Dallas, thought the ranking was incomplete, but it got him thinking. What if he could compile a database ranking various business schools by their research output in leading journals? "This way we, as B-schools, could benchmark ourselves based on the publications in these journals," he says.

Pirkul and his colleague Varghese S. Jacob, senior associate dean at the School of Management, put their heads together and the UT-Dallas Top 100 Business School Research Rankings was born in 2005. BusinessWeek and The Financial Times include an academic research component in their rankings, but the UT-Dallas take may well be the single best measurement of business school scholarship currently available.

The latest global survey, which tracks articles published over the five years from 2006 to 2010, was updated this past March. It has Wharton in the lead, followed by the business schools at Duke, Michigan, NYU, and Harvard. Only two non-U.S. schools made the top 20: INSEAD at No. 10 and the Hong Kong University of Science and Technology at No. 18.

Like every other ranking, this one is not without controversy. For one thing, the list fails to take into account the size of the faculty at each school. It's silly to compare Wharton, with 459 faculty members, to Dartmouth's Tuck, with just 76 professors, without adjusting the data for size.

And there is the issue of UT Dallas itself. How is it possible that this business school -- whose full-time MBA program fails to make four of the five major rankings -- is ranked 16th, beating out Berkeley, UCLA, Dartmouth, Cornell, and Yale? In the following interview, Pirkul explains the rationale behind his research rankings and responds to the criticism:

In a world already cluttered with too many business school rankings, what spurred you to launch a new one?

When we started our rankings, there really was no place where you could check a school's research productivity. The rankings brought out by BusinessWeek, U.S. News and World Report, etc., were mainly looking at MBA programs or undergraduate programs and they did not take into account research. The Financial Times did, but their results were not really available in a transparent way to researchers and academicians. There was a real need and people wanted to be informed about the strength of a school's research.

All the existing rankings are run by independent parties, such as magazines and newspapers. Isn't it a little strange to see a business school jump into the B-school rankings game?

The fact that we compile this doesn't matter as we can't affect the ranking one way or the other. We did it because the magazines weren't doing it. We needed it for our own benchmarking purposes. Then we decided to share it with our colleagues. If we did something that's not credible, people would simply ignore it.

There seems to be no adjustment for the size of the faculty in the ranking. So Wharton, with a total faculty of 459, is first largely because the size of its faculty is so large. The smaller business schools are at a distinct disadvantage. Do you agree that this is a problem?

I wish we were able to provide a version with scaling. In fact, the first year we tried to do scaling. The problem is that many schools did not respond to us. Some provided data that was not correct. We ended up using numbers that we determined from Web pages. We ended up with lots of complaints about the accuracy of data. This is why we decided not to scale. Fortunately, we provide the full data so if a school wants to benchmark against a finite number of schools, they can do the scaling themselves.
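As a rough illustration of the scaling the dean describes, the sketch below divides a school's article count by its faculty size, using the faculty counts cited earlier in the piece (Wharton's 459 and Tuck's 76); the article counts are hypothetical placeholders, not figures from the UT-Dallas database.

```python
# Minimal sketch of per-faculty scaling: divide raw article counts by faculty
# size so schools of very different sizes can be compared on a common footing.
# Faculty sizes come from the article; article counts below are illustrative
# placeholders only, not actual UT-Dallas ranking data.

schools = {
    # school name: (articles in the ranked journals, faculty size)
    "Wharton": (320, 459),
    "Dartmouth (Tuck)": (70, 76),
}

for name, (articles, faculty) in schools.items():
    per_faculty = articles / faculty
    print(f"{name}: {articles} articles / {faculty} faculty = {per_faculty:.2f} per professor")
```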

Who uses this ranking and what kind of feedback have you gotten so far?

The faculty of B-schools, deans and potential PhD students are using it. If I am interviewing a young faculty member, I can use this to get information on his advisors -- how good they are, how many publications they have, etc.

We have gotten overwhelmingly positive feedback on the rankings. This ranking has been in existence for about five years now and it is used worldwide. We have gotten some criticism too. Generally it is about why we haven't included a particular journal, or why we have included a journal that someone thinks is not good.

So how do you choose the journals you use to rank research?

These are really the top set of journals. There may be a journal that is outstanding but not on this list. That is generally because it is an applied psychology or economics journal, or a journal that B-school faculty publish in but that is not a mainline B-school journal.

Your database does not include popular management journals like Harvard Business Review or Sloan Management Review. How come?

That's because we want scientific and academic journals as opposed to those that publish philosophical pieces.

The FT, which also does a B-school research ranking, uses a pool of 45 academic and practitioner journals. BusinessWeek measures research output in 20 journals. Your ranking measures 24. How different is your ranking from theirs?

Our results are highly correlated with their results. The shortcoming of the FT ranking is that they only report results for the schools that apply for the ranking. When they have joint degree programs between two universities, how they calculate [the scores] is very murky. Their purpose is really to rank the degree program and not research productivity.

Publishing a research paper can take years. If a professor moves before the paper is published, who gets the credit?

Let's say I worked on an article while I was here at University of Texas at Dallas. The next year I am at the University of Washington and the paper gets accepted and is published. University of Washington will get the credit.

But isn't there a bias there as the wrong university gets the credit?

There is no bias because people move all the time, and chances are that when one faculty member moves away, another moves in to take their place. There is no way you can capture what percentage of the article was written at one school versus the other. This, we believe, is the most accurate approach, because the faculty member belongs to that school at the time the article appears.

When you look at schools outside North America and Europe, do you factor in regional peer-reviewed research journals? After all, good B-schools in Asia publish a significant amount of research in regional research journals as well as global ones and it would be wrong to ignore that.

I have talked to Asian B-schools and they recognize that these are the top journals in the world. It is true that a lot of their publications have come out in journals that are U.S.-based. That's because business education is dominated by the U.S.

A school might be prolific in its research output, while another might publish less but have greater impact. C.K. Prahalad came up with a winning idea that had widespread impact for a decade. But at the end of the day, it was just one article.

Here's the beauty of research. No one really knows what the impact will be next year or five years from now. My colleagues try to measure research impact as short-term impact. Our position is that we are measuring output in research journals. Each article goes through a process of refereeing by the leading academics of our time.
