Wednesday, June 29, 2011

Stata 12 embraces structural equation models

Stata 12 has just been announced, and the software will start shipping by the end of July. A key addition in this version is a module for structural equation models (SEM), a staple tool in marketing, psychology, and several other research disciplines.

LISREL and AMOS have been the two most commonly used software packages for estimating SEMs. Professor John Fox authored an R package, sem, which allows R users to estimate basic SEMs. Professor Fox used Wheaton, Muthén, Alwin, and Summers’s (1977) panel data to illustrate SEM estimation; Stata 12 illustrates SEM with the same data set.
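
To give a flavor of the new syntax, here is a minimal, hedged sketch of how a Wheaton-style model might be written with Stata 12's sem command. The indicator and latent-variable names below (anomia67, pwless67, anomia71, pwless71, educ66, occstat66, Alien67, Alien71, SES) mirror the conventions of the Wheaton example and are placeholders, not a verbatim copy of Stata's documentation.

  * Minimal sketch: assumes the six observed indicators are in memory;
  * sem treats names beginning with a capital letter as latent variables
  * by default (Alien67, Alien71, SES).
  sem (anomia67 pwless67 <- Alien67)   /// alienation measured in 1967
      (anomia71 pwless71 <- Alien71)   /// alienation measured in 1971
      (educ66 occstat66  <- SES)       /// socioeconomic status in 1966
      (Alien67 <- SES)                 /// structural part of the model
      (Alien71 <- Alien67 SES),        ///
      cov(e.anomia67*e.anomia71)       // allow correlated measurement errors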

Stata 12, according to Stata's website, supports the following SEM features (a brief command sketch follows the list):
  • Use GUI or command language to specify model.
  • Standardized and unstandardized results.
  • Direct and indirect effects.
  • Goodness-of-fit statistics.
  • Tests for omitted paths and tests of model simplification including modification indices, score tests, and Wald tests.
  • Predicted values and factor scores.
  • Linear and nonlinear tests of estimated parameters, and linear and nonlinear combinations of estimated parameters with confidence intervals (CIs).
  • Estimation across groups is as easy as adding group(sex) to the command. Test for group invariance. Easily add or relax constraints across groups.
  • SEMs may be fitted using raw or summary statistics data.
  • Maximum likelihood (ML) and asymptotic distribution free (ADF) estimation. ADF is also known as generalized method of moments (GMM). Missing-at-random (MAR) data are supported via full-information maximum likelihood (FIML).
  • Robust estimates of standard errors, and standard errors for clustered samples, are available.
  • Support for survey data including sampling weights, stratification and poststratification, and clustered sampling at one or more levels. 
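
As a rough, non-authoritative illustration of how a few of these features map onto commands, the sketch below refits the Wheaton-style model from the earlier example across groups (the list above itself uses group(sex) as its example) and then calls some of the relevant postestimation commands. The grouping variable sex is hypothetical.

  * Hedged sketch of some of the options listed above; "sex" is a
  * hypothetical grouping variable, and the model is the Wheaton-style one
  * sketched earlier. vce(robust) or vce(cluster clustvar) would request
  * robust or clustered standard errors; method(adf) requests ADF
  * estimation; method(mlmv) handles missing-at-random data via FIML.
  sem (anomia67 pwless67 <- Alien67) (anomia71 pwless71 <- Alien71) ///
      (educ66 occstat66 <- SES) (Alien67 <- SES)                    ///
      (Alien71 <- Alien67 SES), group(sex)

  sem, standardized        // redisplay results in standardized form
  estat gof, stats(all)    // goodness-of-fit statistics
  estat mindices           // modification indices for omitted paths
  estat teffects           // direct, indirect, and total effects
  estat ginvariant         // tests of parameter invariance across groups
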
Professor Fox has also made several web-based tutorials on SEM in R, which can be accessed online.


Monday, June 27, 2011

45 Journals included in the Financial Times Research Rank

The list below details the 45 journals used by the Financial Times in compiling the Business School research rank, included in both the Global MBA and EMBA rankings.
1. Academy of Management Journal (Academy of Management, Ada, Ohio)
2. Academy of Management Perspectives (AMP)
3. Academy of Management Review (Academy of Management)
4. Accounting, Organisations and Society (Elsevier)
5. Accounting Review (American Accounting Association)
6. Administrative Science Quarterly (Cornell University)
7. American Economic Review (American Economic Association, Nashville)
8. California Management Review (UC Berkeley)
9. Contemporary Accounting Research (Wiley)
10. Econometrica (Econometric Society, University of Chicago)
11. Entrepreneurship Theory and Practice (Baylor University, Waco, Texas)
12. Harvard Business Review (Harvard Business School Publishing)
13. Human Resource Management (John Wiley and Sons)
14. Information Systems Research (Informs)
15. Journal of Accounting and Economics (Elsevier)
16. Journal of Accounting Research (University of Chicago)
17. Journal of Applied Psychology (American Psychological Association)
18. Journal of Business Ethics (Kluwer Academic)
19. Journal of Business Venturing (Elsevier)
20. Journal of Consumer Psychology (Elsevier)
21. Journal of Consumer Research (University of Chicago)
22. Journal of Finance (Blackwell)
23. Journal of Financial and Quantitative Analysis
24. Journal of Financial Economics (Elsevier)
25. Journal of International Business Studies (Academy of International Business)
26. Journal of Management Studies (Wiley)
27. Journal of Marketing (American Marketing Association)
28. Journal of Marketing Research (American Marketing Association)
29. Journal of Operations Management (Elsevier)
30. Journal of Political Economy (University of Chicago)
31. Journal of the American Statistical Association (American Statistical Association)
32. Management Science (Informs)
33. Marketing Science (Informs)
34. MIS Quarterly (Management Information Systems Research Centre, University of Minnesota)
35. Operations Research (Informs)
36. Organization Science (Informs)
37. Organization Studies (SAGE)
38. Organizational Behavior and Human Decision Processes (Academic Press)
39. Production and Operations Management (POMS)
40. Quarterly Journal of Economics (MIT)
41. Rand Journal of Economics (The Rand Corporation)
42. Review of Accounting Studies (Springer)
43. Review of Financial Studies (Oxford University Press)
44. Sloan Management Review (MIT)
45. Strategic Management Journal (John Wiley and Sons)

FT.com / UK - 45 Journals used in FT Research Rank

Latest research from the World Bank

Here are the new studies on development economics from the World Bank.

WPS5660. On the Relevance of Freedom and Entitlement in Development: New Empirical Evidence (1975-2007) by Jean-Pierre Chauffour
WPS5661. Economic Performance under NAFTA: A Firm-Level Analysis of the Trade-Productivity Linkages by Rafael E. De Hoyos and Leonardo Iacovone
WPS5662. Deep Trade Policy Options for Armenia: The Importance of Services, Trade Facilitation and Standards Liberalization by Jesper Jensen and David G. Tarr
WPS5663. Cotton Subsidies, the WTO, and the ‘Cotton Problem’ by John Baffes
WPS5664. Mobile Banking and Financial Inclusion: The Regulatory Lessons by Michael Klein and Colin Mayer
WPS5665. Measuring the Impacts of Global Trade Reform with Optimal Aggregators of Distortions by David Laborde, Will Martin, and Dominique van der Mensbrugghe
WPS5666. Fiscal Policy and Debt Dynamics in Developing Countries by Ethan Ilzetzki
WPS5667. Incentive Compatible Reforms: the Political Economy of Public Investments in Mongolia by Zahid Hasnain
WPS5668. Eight Questions about Brain Drain by John Gibson and David McKenzie
WPS5669. Does Cash for School Influence Young Women’s Behavior in the Longer Term? Evidence from Pakistan by Andaleeb Alam, Javier E. Baez, and Ximena V. Del Carpio
WPS5670. Financial Liberalization and Allocative Efficiency of Capital by Madina Kukenova
WPS5671. Reliability of Recall in Agricultural Data by Kathleen Beegle, Calogero Carletto, and Kristen Himelein
WPS5672. Biofuels and Climate Change Mitigation: A CGE Analysis Incorporating Land-Use Change by Govinda R. Timilsina and Simon Mevel
WPS5673. World Oil Price and Biofuels: A General Equilibrium Analysis by Govinda R. Timilsina, Simon Mevel, and Ashish Shrestha
WPS5674. Students Today, Teachers Tomorrow? Identifying Constraints on the Provision of Education by Tahir Andrabi, Jishnu Das, and Asim Ijaz Khwaja
WPS5675. Strategic Climate Policy with Offsets and Incomplete Abatement: Carbon Taxes Versus Cap-and-Trade by Jon Strand
WPS5676. Collective Action, Political Parties and Pro-Development Public Policy by Philip Keefer
WPS5677. International Harmonization of Product Standards and Firm Heterogeneity in International Trade by José-Daniel Reyes
WPS5678. Under What Conditions Does a Carbon Tax on Fossil Fuels Stimulate Biofuels? by Govinda R. Timilsina, Stefan Csordás, and Simon Mevel
WPS5679. Implications of the Doha Market Access Proposals for Developing Countries by David Laborde, Will Martin, and Dominique van der Mensbrugghe
WPS5680. Schooling and Youth Mortality: Learning from a Mass Military Exemption by Piero Cipollone and Alfonso Rosolia
WPS5681. Assessing the Long-term Effects of Conditional Cash Transfers on Human Capital: Evidence from Colombia by Javier E. Baez and Adriana Camacho
WPS5682. Is infrastructure capital productive? A dynamic heterogeneous approach by César Calderón, Enrique Moral-Benito, and Luis Servén
WPS5683. Small Area Estimation-Based Prediction Methods to Track Poverty: Validation and Applications by Luc Christiaensen, Peter Lanjouw, Jill Luoto, and David Stifel
WPS5684. Adjusting the Labor Supply to Mitigate Violent Shocks: Evidence from Rural Colombia by Manuel Fernández, Ana María Ibáñez, and Ximena Peña
WPS5685. A profile of border protection in Egypt: An Effective Rate of Protection approach adjusting for energy subsidies by Alberto Valdés and William Foster
WPS5686. Nigeria’s Infrastructure: A Continental Perspective by Vivien Foster and Nataliya Pushak
WPS5687. Cape Verde’s Infrastructure: A Continental Perspective by Cecilia M. Briceño-Garmendia and Daniel Alberto Benitez

Why Science Struggles to Correct Its Mistakes - NYTimes.com


It’s Science, but Not Necessarily Right
By CARL ZIMMER

ONE of the great strengths of science is that it can fix its own mistakes. “There are many hypotheses in science which are wrong,” the astrophysicist Carl Sagan once said. “That’s perfectly all right: it’s the aperture to finding out what’s right. Science is a self-correcting process.”

If only it were that simple. Scientists can certainly point with pride to many self-corrections, but science is not like an iPhone; it does not instantly auto-correct. As a series of controversies over the past few months have demonstrated, science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan’s words would suggest. Science runs forward better than it does backward.

Why? One simple answer is that it takes a lot of time to look back over other scientists’ work and replicate their experiments. Scientists are busy people, scrambling to get grants and tenure. As a result, papers that attract harsh criticism may nonetheless escape the careful scrutiny required if they are to be refuted.

In May, for instance, the journal Science published eight critiques of a controversial paper that it had run in December. In the paper, a team of scientists described a species of bacteria that seemed to defy the known rules of biology by using arsenic instead of phosphorus to build its DNA. Chemists and microbiologists roundly condemned the paper; in the eight critiques, researchers attacked the study for using sloppy techniques and failing to rule out more plausible alternatives.

But none of those critics had actually tried to replicate the initial results. That would take months of research: getting the bacteria from the original team of scientists, rearing them, setting up the experiment, gathering results and interpreting them. Many scientists are leery of spending so much time on what they consider a foregone conclusion, and graduate students are reluctant because they want their first experiments to make a big splash, not confirm what everyone already suspects.

“I’ve got my own science to do,” John Helmann, a microbiologist at Cornell and a critic of the Science paper, told Nature. The most persistent critic, Rosie Redfield, a microbiologist at the University of British Columbia, announced this month on her blog that she would try to replicate the original results — but only the most basic ones, and only for the sake of science’s public reputation. “Scientifically I think trying to replicate the claimed results is a waste of time,” she wrote in an e-mail.

For now, the original paper has not been retracted; the results still stand.

Even when scientists rerun an experiment, and even when they find that the original result is flawed, they still may have trouble getting their paper published. The reason is surprisingly mundane: journal editors typically prefer to publish groundbreaking new research, not dutiful replications.

In March, for instance, Daryl Bem, a psychologist at Cornell University, shocked his colleagues by publishing a paper in a leading scientific journal, The Journal of Personality and Social Psychology, in which he presented the results of experiments showing, he claimed, that people’s minds could be influenced by events in the future, as if they were clairvoyant.

Three teams of scientists promptly tried to replicate his results. All three teams failed. All three teams wrote up their results and submitted them to The Journal of Personality and Social Psychology. And all three teams were rejected — but not because their results were flawed. As the journal’s editor, Eliot Smith, explained to The Psychologist, a British publication, the journal has a longstanding policy of not publishing replication studies. “This policy is not new and is not unique to this journal,” he said.

As a result, the original study stands.

Even when follow-up studies manage to see the light of day, they still don’t necessarily bring matters to a close. Sometimes the original authors will declare the follow-up studies to be flawed and refuse to retract their paper. Such a standoff is now taking place over a controversial claim that chronic fatigue syndrome is caused by a virus. In October 2009, the virologist Judy Mikovits and colleagues reported in Science that people with chronic fatigue syndrome had high levels of a virus called XMRV. They suggested that XMRV might be the cause of the disorder.

Several other teams have since tried — and failed — to find XMRV in people with chronic fatigue syndrome. As they’ve published their studies over the past year, skepticism has grown. The editors of Science asked the authors of the XMRV study to retract their paper. But the scientists refused; Ms. Mikovits declared that a retraction would be “premature.” The editors have since published an “editorial expression of concern.”

Once again, the result still stands.

But perhaps not forever. Ian Lipkin, a virologist at Columbia University who is renowned in scientific circles for discovering new viruses behind mysterious outbreaks, is also known for doing what he calls “de-discovery”: intensely scrutinizing controversial claims about diseases.

Last September, Mr. Lipkin laid out several tips for effective de-discovery in the journal Microbiology and Molecular Biology Reviews. He recommended engaging other scientists — including those who published the original findings — as well as any relevant advocacy groups (like those for people suffering from the disease in question). Together, everyone must agree on a rigorous series of steps for the experiment. Each laboratory then carries out the same test, and then all the results are gathered together.

At the request of the National Institutes of Health, Mr. Lipkin is running just such a project with Ms. Mikovits and other researchers to test the link between viruses and chronic fatigue, based on a large-scale study of 300 subjects. He expects results by the end of this year.

This sort of study, however, is the exception rather than the rule. If the scientific community put more value on replication — by setting aside time, money and journal space — science would do a better job of living up to Carl Sagan’s words.

Carl Zimmer writes frequently for The New York Times about science and is the author, most recently, of “A Planet of Viruses.”

Sunday, June 5, 2011

Census: Costly but necessary

From the Economist:
Censuses: Costing the count | The Economist

Costing the count

CENSUSES are as old as civilisation and governments are increasingly following the United Nations benchmark of tallying their people once a decade. Keiko Osaki-Tomita of the UN Statistical Division says that a record 70 territories are holding censuses in 2011. Only Iraq, Lebanon, Myanmar, Somalia, Uzbekistan and Western Sahara will fail to hold a count in this ten-year round.

The big growth in censuses is in Africa, but changes in the way they happen are coming from Europe. Fears of Germany’s anti-census lobby (which liked to chant “Only sheep let themselves be counted”) blocked counts for 24 years after 1987—a gap exceeded only by places such as Angola. Change came with a new European Union law requiring decennial headcounts. But the 80,000 census takers who were at work in May did not depend only on the house visits that privacy-conscious Germans find so troubling. Instead they culled data from national employment records and local population registers. Only around 10% of citizens were randomly selected for old-style surveys (with a stiff fine for those refusing to respond).

During this census round (which ends in 2014), 17 European countries will use government databases in some way. In nine, government-held data will be the only source of information.

Door-knocking is not just unpopular. It is costly. Finland, one of the first countries to ditch its shoe-leather census, saw expenses fall by more than 90% between 1980 and 1990, and now completes its counts for around €1m ($1.44m) for 5.3m people. America’s $13 billion census last year cost around $42 per head. Door-stepping is dangerous too: 15 census officials were killed during South Africa’s count in 2001.

Pete Benton, deputy director of the British census, says that using databases means that counts can be more frequent. That saves money too—gearing up once every ten years is expensive. And the long gap between traditional censuses is a growing problem as people become more mobile and households more flexible. Britain aims to decide by 2014 if it will bin the old methods for the 2021 census.

New approaches are harder for countries that do not keep a central population register. Continental European countries like Germany impose a legal duty on citizens to register their residence with the local authority. Others, like Britain or America, keep information in quite separate databases—dealing with, say, tax, pensions, elections, passports, driving licences and health care. Some countries label their citizens with single identifying numbers. Elsewhere, that’s loathed. Even in well-run Finland, the census-takers have to combine information from several dozen different data sets; efforts to link them started fully 50 years ago. High levels of immigration can also make government data unreliable, says Kevin Deardorff, of the American census office.

Modern-minded American statisticians also face thorny legal questions: the constitution calls for an “actual enumeration” of the population. That may make newfangled census methods vulnerable to challenges from the courts. A Supreme Court ruling already limits the use of statistical sampling, which adjusts survey data to more accurately include minorities, who are generally undercounted by older methods. In December the Government Accountability Office noted that the census’s cost has on average doubled each decade since 1970. Without “fundamental reforms”, the next one could cost $30 billion.

Yet recent censuses in Asia have proven that even the most mammoth old-fashioned headcounts can be done cheaply. In February India’s census authority completed a year-long effort of tallying and biometrically registering its 1.2 billion residents. The census part cost just over 40 US cents per head. In November China’s ten-day survey, carried out by 10m census workers, counted 1.34 billion people for what it said was the equivalent of $1 a time.

Governments rarely reckon that they need to know less. Indeed, says Mr Benton, “Sometimes the more information you have the more you want.” But how best to collect it? More data, please.