Postdoctoral Researcher
University of Konstanz
evgeniya.nazrullaeva@uni-konstanz.de
CV
I am a Postdoctoral Researcher in Comparative Politics at the University of Konstanz. In 2025, I will join the University of Liverpool as a Lecturer (Assistant Professor) in Politics.
Previously, I was a Postdoctoral Fellow in the School of Public Policy at the London School of Economics & Political Science. I am also affiliated with the CAGE Research Centre at the University of Warwick and with the project “Democracy under Threat: How Education can Save it” at the University of Glasgow.
I received my PhD in Political Science from UCLA. I also hold a PhD (Candidate of Science) in Economics from the Higher School of Economics and an MA in Economics from the New Economic School.
My research interests are in the areas of political economy and economic history.
Abstract: In recent decades, the personalization of power has been prevalent across authoritarian regimes. Studies have explored how personalization shapes the use of violent repression. We know less about how the concentration of power affects nonviolent strategies of political control that generate voluntary compliance within society. To explore this question, we ask whether, in the process of concentrating power, leaders increase the state’s control over education and the media. We combine data on gradations of personalism with novel data on state control of education and the media across 229 authoritarian regimes from 1950 to 2010. We show that the personalization of power does not obliterate nonviolent strategies of control. As personalization increases, the state’s control of education and the media also expands. These findings answer several calls to move beyond the study of repression for understanding the politics of non-democracies and have implications for research on personalism and authoritarian politics.
Abstract: Innovative and resource-demanding data collection is crucial for the advancement of social science research. Many efforts have been made to assemble original datasets and make them publicly available for the benefit of the wider research community. Unfortunately, it is common for researchers to use existing datasets without paying sufficient attention to how they were constructed. To shed light on the advantages, limitations, and implications of different data collection methodologies, and assess how (often seemingly trivial) differences in assumptions or practices influence scores, we take advantage of a unique opportunity to compare three new historical datasets. These three datasets have important similarities that facilitate comparisons, measuring similar aspects of education practices and policies across countries, but they were created using different methods and even seemingly similar measures rely on slightly different assumptions. The EPSM dataset (Education Policies and Systems across Modern History) contains information about the content of de jure school curriculum, teacher training, and other education policies and is based on hand-coding a combination of primary and secondary sources. The HEQ initiative (Historical Education Quality Database) gathers information on similar issues but relies entirely on primary sources such as education laws, regulations, and national curriculum plans. The V-Indoc dataset (Varieties of Indoctrination) relies on country expert assessments of school curriculums, teacher policies, and the presence and nature of political indoctrination. We introduce each dataset and characterize the degree of convergence/divergence between comparable variables along several relevant dimensions.
Abstract: On what basis can we claim a scholarly community understands a phenomenon? Social scientists generally propagate many rival explanations for what they study. How best to discriminate between or aggregate them introduces myriad questions because we lack standard tools that synthesize discrete explanations. In this paper, we assemble and test a set of approaches to the selection and aggregation of predictive statistical models representing different social scientific explanations for a single outcome: original crowd-sourced predictive models of COVID-19 mortality. We evaluate social scientists' ability to select or discriminate between these models using an expert forecast elicitation exercise. We provide a framework for aggregating discrete explanations, including using an ensemble algorithm (model stacking). Although the best models outperform benchmark machine learning models, experts are generally unable to identify models' predictive accuracy. Findings support the use of algorithmic approaches for the aggregation of social scientific explanations over human judgement or ad-hoc processes.
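For readers unfamiliar with model stacking, the sketch below illustrates the general idea of aggregating several predictive models through a cross-validated meta-learner. It is a minimal illustration on simulated data: the base models, features, and the use of scikit-learn's StackingRegressor are assumptions for exposition, not the code or models used in the paper.

```python
# A minimal sketch of model stacking (ensemble aggregation) on simulated data.
# Illustrative only; it does not reproduce the paper's models or data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Simulated stand-in for an outcome such as mortality counts.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each base model stands in for one discrete "explanation" of the outcome.
base_models = [
    ("linear", LinearRegression()),
    ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
]

# The meta-learner combines the base models' cross-validated predictions,
# which is the core of stacking as an aggregation rule.
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge())
stack.fit(X_train, y_train)
print("Held-out R^2 of the stacked ensemble:", stack.score(X_test, y_test))
```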
The invasion of Ukraine has upended Russian education. Washington Post, “The Monkey Cage.” September 14, 2022