The course will take place from January 28 to February 1, 2019, in Room 105 of the Faculdade de Filosofia, Letras e Ciências Humanas (FFLCH), from 9am to 6pm. You do not need to bring your own laptop; where we use computers for the replication exercises, we will work in a dedicated laboratory with the necessary software pre-installed.
Participants should have a basic understanding of research design and quantitative methods. IPSA courses that meet this requirement include “Designing Feasible Research Projects in Political Science”, “Basics of Quantitative Methods for Public Policy Analysis”, “Advanced Issues in Quantitative Methods for Public Policy Analysis”, “Advanced Research Design in Political Science: From Modelling to Manuscript” and “Basics of Multi-Method Research: Integrating Case Studies and Regression”.
Please send any questions to jphillips@usp.br.
This course will give students the tools and confidence to understand, deconstruct and critique political science research papers. By encouraging participants to ground critiques of both quantitative and qualitative research in the framework and language of causation, the course hones vital skills for identifying hidden assumptions, weighing the strength of evidence and suggesting alternative explanations. The course also underlines the importance of making these critiques constructive by suggesting alternative research designs and a wide range of robustness checks. By the end of the course, participants will be confident contributing to peer review processes as colleagues, seminar participants or journal referees, and will also gain new perspectives on how to design and execute their own research.
The teaching approach aims to systematize the types of critique we can make so that participants are able to provide multiple reasons why the account offered by an author might not be valid. While the course covers critiques of measurement, theory and modeling, we focus particularly on critiques of causation, including risks of selection, confounding and reverse causation, demystifying terms such as ‘counterfactual’, ‘complier’ and ‘external validity’. In turn, we consider how to make critiques constructive – first, in the way they are communicated, and, second, in identifying positive research strategies that can overcome or mitigate common critiques, for example alternative research designs and robustness tests.
We will use the afternoon lab sessions to practice formulating effective and constructive critiques. Building on examples drawn from a wide range of papers across the fields of political science and international relations, participants will develop and compare critiques in a range of styles. Participants will also have the option (no obligation or expectation) of sharing their own research ideas and papers to receive feedback from others. The lab sessions will also include the replication of code from one or two published analyses to highlight the range of modeling options researchers face and the breadth of potential critiques that this opens up. The replication exercises will be guided and can be completed in Stata or R.
Because the course will involve intensive discussion and lab sessions, you may find it valuable to read some of the reading materials listed below before the start of the course. The papers that we will be discussing and critiquing during the afternoon lab sessions will be provided to you during the course.
We discuss what constitutes a convincing argument, the nature of causation, and how a paper can contribute to learning in the discipline. Then we learn to systematically translate the text of a paper into the core elements of a research argument: the units of analysis, the comparisons, the concepts, the measures, the assumptions, the theory, the models, the evidence and the conclusions. Finally, we consider basic critiques of whether the measures reflect the concepts, whether the model captures the theory, and whether the conclusions follow from the premises and evidence.
Readings:
Lab Exercise: We practice identifying logical fallacies, and quickly deconstructing the complex arguments of published papers into simple causal statements and causal diagrams.
How can we know that X causes Y? We review the framework of causal inference and why most academic studies cannot prove that X causes Y. We practice making the three fundamental causal critiques: omitted variables, reverse causation and selection bias.
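The omitted-variable critique can be illustrated with a short simulation. The sketch below is in Python purely for illustration (the course labs themselves use Stata or R, and this example is not part of the course materials): a confounder Z drives both X and Y, X has no true effect on Y, yet a naive regression of Y on X reports a sizeable "effect" that disappears once Z is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Z confounds X and Y; X has NO true causal effect on Y.
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = z + rng.normal(scale=0.5, size=n)

# Naive OLS slope of Y on X (omitting Z): biased away from zero.
naive = np.cov(x, y)[0, 1] / np.var(x)

# OLS of Y on X controlling for Z recovers the true effect (~0).
design = np.column_stack([np.ones(n), x, z])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
adjusted = coefs[1]

print(round(naive, 2), round(adjusted, 2))  # naive is near 0.8, adjusted near 0
```

The same logic underlies the verbal critique: if a plausible Z is omitted from the analysis, the estimated relationship between X and Y may be entirely spurious.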
Readings:
Lab Exercise: We practice making rapid written and oral critiques of a series of political science studies.
We review a range of causal research designs, the assumptions on which they rest, and their connection to specific statistical models. We practice assessing whether these assumptions have been met.
Readings:
Lab Exercise: We work in teams to critique a paper’s assumptions. We also replicate the empirical analysis of that paper in Stata or R to highlight the assumptions made and the models chosen.
We look beyond each argument’s own claims to critique the generalizability of the findings, the sensitivity of the findings to the research design, the match between theory and evidence, and support for specific causal mechanisms. We also discuss publication bias, ‘p-hacking’ and pre-registration.
Readings:
Lab Exercise: We use our judgment of the relevance and generalizability of study findings to draft policy advice on how governments should respond to a paper’s conclusions. We also conduct sensitivity analysis of the data and findings in key studies using Stata/R.
We consider various strategies and techniques for overcoming weaknesses in an argument. These include the use of alternative research designs, complementary qualitative data, deriving multiple tests from theory, uncovering ‘hidden’ units, robustness tests, heterogeneity tests and placebo tests. We also talk about how to frame and articulate critiques in ways which motivate rather than discourage.
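The logic of a placebo test can be sketched in a few lines of simulation. Again, Python is used only for illustration (the labs use Stata or R, and the variable names here are hypothetical): we estimate the treatment's effect on the real outcome and on a "placebo" outcome the treatment should not affect; a near-zero placebo estimate supports the design, while a sizeable one would signal confounding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

treat = rng.integers(0, 2, size=n)           # randomized binary treatment
y_real = 2.0 * treat + rng.normal(size=n)    # outcome the treatment affects
y_placebo = rng.normal(size=n)               # outcome it should NOT affect

def diff_in_means(y, t):
    """Simple difference-in-means treatment effect estimate."""
    return y[t == 1].mean() - y[t == 0].mean()

effect_real = diff_in_means(y_real, treat)        # near 2.0
effect_placebo = diff_in_means(y_placebo, treat)  # near 0.0
print(round(effect_real, 2), round(effect_placebo, 2))
```

Heterogeneity tests follow a similar pattern: the same estimate is computed within theoretically motivated subgroups, and the pattern of subgroup effects is compared against what the theory predicts.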
Readings:
Lab Exercise: Students present a research paper of their own, or a paper allocated by the teacher, making the most convincing causal case possible. Other students then present constructive critiques and suggest improvements. We also practice implementing placebo and heterogeneity tests in Stata/R.