Course Logistics

The course will take place from January 28 to February 1, 2019, from 9am to 6pm in Room 105 of the Faculdade de Filosofia, Letras e Ciências Humanas (FFLCH). You do not need to bring your own laptop; for the replication exercises we will use a dedicated computer laboratory with the necessary software pre-installed.

Participants should have a basic understanding of research design and quantitative methods. IPSA courses that meet this requirement include “Designing Feasible Research Projects in Political Science”, “Basics of Quantitative Methods for Public Policy Analysis”, “Advanced Issues in Quantitative Methods for Public Policy Analysis”, “Advanced Research Design in Political Science: From Modelling to Manuscript” and “Basics of Multi-Method Research: Integrating Case Studies and Regression”.

Please send any questions to jphillips@usp.br.

Course Objectives

This course will give students the tools and confidence to understand, deconstruct and critique political science research papers. By encouraging participants to ground critiques of both quantitative and qualitative research in the framework and language of causation, the course hones vital skills for identifying hidden assumptions, weighing the strength of evidence and suggesting alternative explanations. The course also underlines the importance of making these critiques constructive by suggesting alternative research designs and a wide range of robustness checks. By the end of the course, participants will be confident contributing to peer review as colleagues, seminar participants or journal referees, and will gain new perspectives on how to design and execute their own research.

The teaching approach aims to systematize the types of critique we can make, so that participants can offer multiple reasons why the account an author provides might not be valid. While the course covers critiques of measurement, theory and modeling, we focus particularly on critiques of causation, including risks of selection, confounding and reverse causation, demystifying terms such as ‘counterfactual’, ‘complier’ and ‘external validity’. We then consider how to make critiques constructive: first, in the way they are communicated; and second, by identifying positive research strategies that can overcome or mitigate common critiques, such as alternative research designs and robustness tests.

We will use the afternoon lab sessions to practice formulating effective and constructive critiques. Building on examples drawn from a wide range of papers across the fields of political science and international relations, participants will develop and compare critiques in a range of styles. Participants will also have the option (with no obligation or expectation) of sharing their own research ideas and papers to receive feedback from others. The lab sessions will also include the replication of code from one or two published analyses to highlight the range of modeling options researchers face and the breadth of potential critiques that this opens up. The replication exercises will be guided and can be completed in Stata or R.

Pre-course Readings

Because the course will involve intensive discussion and lab sessions, you may find it valuable to read some of the materials listed below before the course begins. The papers we will discuss and critique during the afternoon lab sessions will be provided during the course.

Course Outline

Day 1: Deconstructing an Argument

We discuss what constitutes a convincing argument, the nature of causation, and how a paper can contribute to learning in the discipline. Then we learn to systematically translate the text of a paper into the core elements of a research argument: the units of analysis, the comparisons, the concepts, the measures, the assumptions, the theory, the models, the evidence and the conclusions. Finally, we consider basic critiques of whether the measures reflect the concepts, whether the model captures the theory, and whether the conclusions follow from the premises and evidence.

Readings:

  • King, Gary, Robert Keohane and Sidney Verba (1994), Designing Social Inquiry (Princeton University Press), pp. 3-28
  • Adcock, Robert and David Collier (2001), “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research”, American Political Science Review, 95:3
  • Gerring, John (2005), “Causation: A Unified Framework for the Social Sciences”, Journal of Theoretical Politics, 17:2
  • Hall, Peter (2003), “Aligning Ontology and Methodology”, in James Mahoney and Dietrich Rueschemeyer (eds.), Comparative Historical Analysis in the Social Sciences (Cambridge University Press), pp. 373-404

Lab Exercise: We practice identifying logical fallacies and quickly deconstructing the complex arguments of published papers into simple causal statements and causal diagrams.
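
To make the idea of a causal diagram concrete, the sketch below encodes a toy argument in R. The dagitty package and the variable names are illustrative assumptions for the exercise, not taken from any course reading.

    # A minimal sketch of a causal diagram, assuming the 'dagitty' package
    # is installed; the variables and arrows are invented for illustration.
    library(dagitty)

    g <- dagitty("dag {
      Wealth -> Education
      Wealth -> Democracy
      Education -> Democracy
    }")

    # Which variables must be conditioned on to identify the effect of
    # Education on Democracy? Here dagitty reports the confounder, Wealth.
    adjustmentSets(g, exposure = "Education", outcome = "Democracy")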

Day 2: Fundamental Critiques

How can we know that X causes Y? We review the framework of causal inference and why most academic studies cannot prove that X causes Y. We practice making the three fundamental causal critiques: omitted variables, reverse causation and selection bias.
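
As a preview of the omitted-variables critique, the simulation below shows how a confounder can manufacture a ‘treatment effect’ where none exists. This is a minimal sketch: every variable name and coefficient is invented for the example.

    # A hedged illustration of omitted-variable bias with simulated data;
    # all names and effect sizes here are assumptions of the example.
    set.seed(123)
    n <- 10000
    z <- rnorm(n)                       # confounder: drives both x and y
    x <- 0.8 * z + rnorm(n)             # 'treatment', partly caused by z
    y <- 0.0 * x + 1.5 * z + rnorm(n)   # true effect of x on y is zero

    coef(lm(y ~ x))["x"]       # naive regression: biased well away from zero
    coef(lm(y ~ x + z))["x"]   # conditioning on z recovers roughly zero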

Readings:

  • Geddes, Barbara (1990), “How the Cases You Choose Affect the Answers You Get: Selection Bias in Comparative Politics”, Political Analysis, 2:1
  • King, Gary, Robert Keohane and Sidney Verba (1994), Designing Social Inquiry (Princeton University Press), Ch. 3
  • Imbens, Guido W. and Donald B. Rubin (2015), Causal Inference for Statistics, Social, and Biomedical Sciences (Cambridge University Press), Ch. 1
  • Przeworski, Adam (2005), “Is the Science of Comparative Politics Possible?”, in Carles Boix and Susan C. Stokes (eds.), Oxford Handbook of Comparative Politics

Lab Exercise: We practice making rapid written and oral critiques of a series of political science studies.

Day 3: Assessing Causal Evidence

We review a range of causal research designs, the assumptions on which they are based and their connection to specific statistical models. We repeatedly practice assessing whether these assumptions have been met.

Readings:

  • Gerber, Alan S. and Donald P. Green (2012), Field Experiments: Design, Analysis, and Interpretation (W.W. Norton & Company), Ch. 1
  • Dunning, Thad (2012), Natural Experiments in the Social Sciences: A Design-Based Approach, Chs. 3, 4, 8 & 9
  • Eggers, Andrew C., Olle Folke, Anthony Fowler, Jens Hainmueller, Andrew B. Hall and James M. Snyder Jr. (2013), “On the Validity of the Regression Discontinuity Design for Estimating Electoral Effects: New Evidence from Over 40,000 Close Races”

Lab Exercise: We work in teams to critique a paper’s assumptions. We also replicate the empirical analysis of that paper in Stata or R to highlight the assumptions made and the models chosen.
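
As an example of the kind of assumption-probing we will do in this lab, the sketch below checks whether a regression discontinuity estimate is stable as the bandwidth around the cutoff shrinks. The data are simulated and the names invented; this is not drawn from the replication paper itself.

    # A minimal regression discontinuity sketch with simulated data; the
    # running variable, cutoff and effect size are assumptions of the example.
    set.seed(42)
    n      <- 5000
    margin <- runif(n, -0.5, 0.5)        # running variable: vote margin
    treat  <- as.numeric(margin >= 0)    # treatment assigned at the cutoff
    y      <- 2 + 0.5 * treat + 3 * margin + rnorm(n)
    d      <- data.frame(y, treat, margin)

    # Local linear regression on each side of the cutoff; a credible RD
    # estimate should be stable as the bandwidth narrows.
    rd_estimate <- function(bw) {
      coef(lm(y ~ treat * margin, data = subset(d, abs(margin) <= bw)))["treat"]
    }
    sapply(c(0.5, 0.25, 0.1), rd_estimate)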

Day 4: How Much Are We Learning?

We look beyond each argument’s own claims to critique the generalizability of the findings, the sensitivity of the findings to the research design, the match between theory and evidence, and support for specific causal mechanisms. We also discuss publication bias, ‘p-hacking’ and pre-registration.
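
To see why specification searching matters, the simulation below shows how hunting across arbitrary control sets inflates the false-positive rate far above the nominal 5 percent. It is a sketch with invented variables, not any paper’s actual method.

    # A hedged illustration of 'p-hacking' with simulated data: the true
    # effect of x on y is zero in every draw, yet searching over control
    # sets and reporting the best p-value yields frequent 'findings'.
    set.seed(1)
    false_positive <- replicate(1000, {
      x <- rnorm(100)
      y <- rnorm(100)                          # y is pure noise: no effect of x
      controls <- matrix(rnorm(100 * 5), ncol = 5)
      pvals <- sapply(0:5, function(k) {
        d <- if (k == 0) data.frame(y, x) else data.frame(y, x, controls[, 1:k, drop = FALSE])
        summary(lm(y ~ ., data = d))$coefficients["x", 4]
      })
      min(pvals) < 0.05                        # report only the 'best' model
    })
    mean(false_positive)                       # well above the nominal 0.05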

Readings:

  • Cartwright, Nancy (2007), “Are RCTs the Gold Standard?”, BioSocieties, 2, 11-20
  • Deaton, Angus S. (2009), “Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development”
  • Levitt, Steven D. and John A. List (2006), “What Do Laboratory Experiments Tell Us About the Real World?”
  • Hainmueller, Jens, Dominik Hangartner and Teppei Yamamoto (2015), “Validating vignette and conjoint survey experiments against real-world behavior”, Proceedings of the National Academy of Sciences, 112:8
  • Sekhon, Jasjeet S. and Rocio Titiunik (2012), “When Natural Experiments Are Neither Natural nor Experiments”, American Political Science Review, 106:1, 35-57
  • Dunning, Thad (2012), Natural Experiments in the Social Sciences: A Design-Based Approach, Ch. 10
  • Monogan, James (2015), “Research Preregistration in Political Science: The Case, Counterarguments, and a Response to Critiques”

Lab Exercise: We use our judgment of the relevance and generalizability of study findings to draft policy advice on how governments should respond to a paper’s conclusions. We also conduct sensitivity analysis of the data and findings in key studies using Stata/R.

Day 5: Constructive Critiques

We consider various strategies and techniques for overcoming weaknesses in an argument. These include alternative research designs, complementary qualitative data, deriving multiple tests from theory, uncovering ‘hidden’ units, robustness tests, heterogeneity tests and placebo tests. We also discuss how to frame and articulate critiques in ways that motivate rather than discourage.
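
As a flavor of the placebo tests we will implement, the sketch below compares an estimated treatment effect against a distribution of effects from randomly reshuffled ‘placebo’ treatments; heterogeneity tests follow the same pattern with an added interaction term. The data are simulated and the true effect of 0.4 is an assumption of the example.

    # A minimal placebo (permutation) test on simulated data; the variable
    # names and effect size are invented for this illustration.
    set.seed(7)
    n     <- 500
    treat <- rbinom(n, 1, 0.5)
    y     <- 1 + 0.4 * treat + rnorm(n)

    observed <- coef(lm(y ~ treat))["treat"]

    # Re-estimate the 'effect' under 2000 random reshufflings of treatment;
    # a credible effect should be extreme relative to this placebo distribution.
    placebo <- replicate(2000, {
      shuffled <- sample(treat)
      coef(lm(y ~ shuffled))["shuffled"]
    })
    mean(abs(placebo) >= abs(observed))   # permutation-style p-value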

Readings:

  • Miller, Beth et al (2017), “How To Be a Peer Reviewer: A Guide for Recent and Soon-to-be PhDs”, PS: Political Science & Politics
  • Brady, Henry E. (2004), “Data-Set Observations versus Causal-Process Observations: The 2000 U.S. Presidential Election”
  • Kocher, Matthew Adam and Nuno P. Monteiro (2015), “What’s in a Line? Natural Experiments and the Line of Demarcation in WWII Occupied France”
  • Nunn, Nathan and Leonard Wantchekon (2011), “The Slave Trade and the Origins of Mistrust in Africa”, American Economic Review, 101:7, 3221-3252
  • Kosec, Katrina (2011), “Politics and Preschool: The Political Economy of Investment in Pre-Primary Education”

Lab Exercise: Students present a research paper of their own, or a paper allocated by the teacher, making the most convincing causal case possible. Other students then present constructive critiques and suggest improvements. We also practice implementing placebo and heterogeneity tests in Stata/R.