PotP# 14: How much are corporations being blamed for inflation? And can updated data erode old results?
Plus, new data from the CDC looking at youth health, a look at whether 7 point scales are sufficient for some survey tasks, and more!
Hello friends—Happy Monday. Welcome to Pulse of the Polis 14.
We’ve heard for a long time that the number of “independent” voters has grown over the last few decades. (At least if you count folks who “lean” consistently towards one party over the other as “independents”—and I have strong feelings about this.) But one thing we haven’t heard much about is the growth in strong partisans that we’ve experienced over the last 15 years. I’ve written about this in this Medium blog post here. It was also the first time I played around with a pretty fun R package geared towards replicable results1.
It was a lot of fun making this one! Let’s get into some other cool social science work!
I’ve got 6 projects for you this week:
A survey looking to see the extent to which people are blaming companies for inflation
A really impressive paper looking at various kinds of political moderates
A project showing how the vintage of one’s data can affect their results
And some nuggets that happened to catch my eye
Voters See Maximized Corporate Profits as a Primary Driver of Inflation | Fielded: Apr. 12-14 | Morning Consult
High inflation sucks and nobody likes it2. But it is, unfortunately, a stubborn, multifaceted problem. What do people think is the cause? This Morning Consult survey of 2,000 registered voters suggests that many are blaming companies' profit motives. 35% believe that companies' contributions to inflation are primarily driven by the desire to maximize profits; 22% cite supply chain issues (down from 32% in June 2022); 17% point to increased labor costs; and 12% believe that companies play no role in generating inflation. 65% of Americans believe (correctly) that corporate profits have increased over the last 3 years.
I'd love to know the partisan breakdown on this—and which other factors, apart from companies, Americans attribute inflation to, as well as how influential companies are seen to be relative to the overall mix. I'd also love to see the core question asked in a "select all that apply" style. I think a lot of folks believe there are multiple motivations behind higher prices/profits, and I'd love to see those explored.
Overview and Methods for the Youth Risk Behavior Surveillance System — United States, 2021 | Report Published: Apr. 2023 | CDC
Since 1991, the CDC has administered the Youth Risk Behavior Surveillance System, a survey instrument fielded to a stratified random sample of public and private high school students in the US. The 2021 data, collected from 17,000 students in the midst of the Covid-19 pandemic, has been released. Among the insights in the data: Nearly one in four high school students identified as LGBTQ+; about 60% of students overall reported feeling connected to others; 3% experienced housing instability, with some groups (such as LGBTQ+ youth) more likely to experience it; and 20% of students witnessed community violence, with 3.5% reporting carrying a firearm.
There are a bunch of really interesting data points in here, including questions on teen mental health. National and site-level data for this wave and all those going back to 1991 are freely available. In many respects, the patterns seen in the survey continue or accelerate trends that existed prior to the pandemic. Which is to say: teens are smoking and drinking less and having less sex, but are more likely to be struggling with mental health issues.
BofA Survey: 76% of Small Businesses Feel Well-Equipped to Survive a Recession | Released: Apr. 27, 2023 | Ipsos
Bank of America partnered with Ipsos to field a survey of just over 1,100 small business owners (employing 2-99 individuals and earning between $100k and $5 million a year) across the US and in numerous key markets (results were weighted to government estimates). According to the report, small business owners are more confident in their own future revenue growth (65% expect increased revenue) than in the growth of their local economy (43% expect improvement) or the national economy (34%). Inflation, a recession, commodity prices, and the US' political environment are the top concerns of surveyed business owners, with 79%, 72%, 68%, and 68%, respectively, naming each as a concern. Nearly 80% of businesses have raised prices; businesses report an average price hike of just shy of 10%. Most small business owners (78%) are "committed to sustainable business practices." Additionally, about half of firms (49%) say that they are leaning into AI—but the vast majority are doing so in ways that augment existing labor. Only 27% of small businesses say they plan to replace existing labor with AI.
There are two things that I want to highlight here. First is that this survey is in conversation with the earlier Morning Consult survey. While most of the public sees inflation as being caused by greed, it seems like many small business owners are raising prices in response to things such as continued supply chain issues, as well as in an effort to combat increased material and labor costs. That said, it's interesting that the average hike is 10%; that's a bit higher than year-over-year inflation3. Obviously the baskets of goods are different, but I'd hazard that there are at least a few organizations raising prices more than they have to—either to try and get ahead of inflation or to maintain favorable funding circumstances. Second, I hope the AI findings put a damper on people's concerns about AI taking all our jerbs. Current AI technology is likely to be used to increase the efficiency of current staff rather than replace folks. One would hope that this would mean more sustainable work overall, but I doubt that's how it'll end up—but who knows! I hope I'm proven wrong4. I also doubt that most of the firms who fully replace people with AI are going to be satisfied with their choice. I pray that the folks they let go find gainful employment soon; they're likely dodging a massive bullet.
Moderates | Published (Physical): May 2023 | American Political Science Review
Political observers use the term “moderate” to mean a lot of things—too much, really. People believe those espousing “moderate” positions to be closeted partisans, or to be largely uninformed and/or disengaged from politics. But these characterizations conflict with each other! Using a mixture item response theory model5 trained on approximately 280,000 respondents from the 2012-2018 Cooperative Congressional Election Study (CCES), researchers found that they could reliably sort US adults into three camps: 1) individuals whose policy preferences are best understood as reflecting a single underlying left-right (“liberal”-“conservative”) dimension (70-75%); 2) individuals whose policy preferences reflect relatively idiosyncratic mixes of “liberal” and “conservative” positions (20-25%); and 3) people who were largely inattentive (1-5%). This yields three kinds of moderates: those who genuinely hold positions in between those espoused by the major parties (camp 1), those who are moderate by dint of holding extreme positions on different issues (camp 2), and those who are moderate because their responses suggest they’re not paying attention (camp 3). The three camps have different propensities. Even removing camps 2 and 3 from the estimates, the authors show that the majority of Americans are ideologically moderate and that moderates of all stripes are a powerful deciding force in US elections.
This is phenomenal work. First and foremost, we stan one-word paper titles. Second: I know that I’ve been guilty of dismissing moderates on at least a few occasions (or being sloppy with my language such that I conflate moderates with leaners—which is simply a different kind of inaccuracy and imprecision on my part!). These findings don't just fit in with the established literature; they were carved to fit the gaps & niches with a freaking laser cutter. I would love to see more work investigating how these moderate tendencies interact with partisan identity when it comes time to engage in various actions (like voting)—and the kinds of instances where different moderates are likely to defect from voting for the party they lean/identify with. We definitely saw that “moderation” on abortion has been a big driver of folks either not voting or split-ticketing in the last couple of years.
Their measurement strategy for identifying moderates requires about 20 policy questions, which isn’t cheap (from a survey perspective) but may be worth the cost if you’re really interested in one type of moderate over another. I’m hoping that future work can bring that battery down to a more manageable number so that it can be more readily used in other survey research.
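To make the three-camp distinction concrete, here is a toy Python simulation (strictly illustrative; this is not the paper's actual mixture IRT model, and all numbers are invented). It shows why a simple summary score over a roughly 20-item battery can't separate the camps on its own: respondents driven by a single underlying dimension and respondents answering idiosyncratically or inattentively can all average near the scale midpoint, but the spread of their scores differs sharply.

```python
import random
import statistics

random.seed(1)
N_ITEMS = 20     # roughly the battery size the measurement strategy needs
N_PEOPLE = 2_000

def unidimensional():
    """One latent left-right ideal point drives every item response
    (1 = the 'liberal' position on that item)."""
    ideal = random.random()
    return [1 if random.random() < ideal else 0 for _ in range(N_ITEMS)]

def mixed_or_inattentive():
    """Item-by-item mixes of positions (or pure noise): each answer is
    unrelated to the others, so summary scores cluster near the middle."""
    return [random.randint(0, 1) for _ in range(N_ITEMS)]

def share_liberal(responses):
    return sum(responses) / len(responses)

uni_scores = [share_liberal(unidimensional()) for _ in range(N_PEOPLE)]
mix_scores = [share_liberal(mixed_or_inattentive()) for _ in range(N_PEOPLE)]

# Both camps average near 0.5 ("moderate"), but for very different reasons:
# the first spreads across the whole spectrum, the second hugs the center.
print(f"unidimensional: mean={statistics.mean(uni_scores):.2f}, "
      f"sd={statistics.stdev(uni_scores):.2f}")
print(f"mixed/noise:    mean={statistics.mean(mix_scores):.2f}, "
      f"sd={statistics.stdev(mix_scores):.2f}")
```

The dispersion difference (the single-dimension camp spans the whole spectrum while item-by-item mixes pile up near 0.5) is, very loosely, the kind of signal a mixture model can exploit that a single averaged score throws away.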
Task Sensitivity and Noise: How Mechanical Properties of Preference Elicitation Tasks Account for Differences in Preferences Across Tasks | Forthcoming | Decision
It’s not uncommon in surveys for researchers to ask how much you like two different objects/ideas/policies/concepts and then take any difference to imply a transitive preference order. For instance: you rate apples a 5, oranges a 5, and pears a 3; researchers will often conclude that you like apples and oranges equally and prefer both over pears. However, this research, using simulations and survey experiments, demonstrates that it is frequently not appropriate to draw relative conclusions from absolute preference questions. Doing so requires us to make assumptions about how respondents’ true preferences map onto the scale; the paper calls out 7-point scales (which happen to be among the longest regularly used scales in practice) as too coarse to support extrapolating differences in relative preferences. Additionally, ranking tasks, where people are asked to simply put the options in order from most to least preferred, tend to be “noisy” (i.e., error prone), especially if the options are relatively close together in true preference. Rankings in those cases are likely more a function of how the question is posed and how the elicitation task is structured than a reflection of pure, unalloyed preference.
TL;DR: Build survey instruments around what you actually want to measure! You can't assume that people value two items rated 3 on a 5-point scale equally. If you want to measure whether people prefer one option over another, ask them that! The art of survey research often requires practitioners to get creative with how they elicit preferences—but if you get too clever, you'll end up with data that only obliquely gets at what you wanted to know! Start your design with what you want to know by the end of the study. Fortunately, there are more advanced discrete choice set-ups that are useful in these kinds of situations (e.g., conjoint, MaxDiff)—but the advice about rankings being noisy when there isn’t a lot of difference between the options holds very, very true.
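As a concrete illustration of the coarseness point, here is a small Python simulation with hypothetical numbers (not taken from the paper): two options whose true preferences are close but genuinely different, rated on a 7-point scale with a bit of response noise. A large share of the paired ratings come back tied, and a nontrivial share reverse the true order.

```python
import random

random.seed(42)

def rate_on_scale(true_pref, n_points=7, noise_sd=0.07):
    """Map a latent preference in [0, 1] onto a coarse n-point scale,
    adding a little response noise before rounding to a scale point."""
    noisy = true_pref + random.gauss(0, noise_sd)
    raw = 1 + noisy * (n_points - 1)
    return min(n_points, max(1, round(raw)))

# Two options whose true preferences are close but NOT equal: A > B.
true_a, true_b = 0.62, 0.58

ties = correct = flipped = 0
for _ in range(10_000):
    ra, rb = rate_on_scale(true_a), rate_on_scale(true_b)
    if ra == rb:
        ties += 1
    elif ra > rb:
        correct += 1
    else:
        flipped += 1

print(f"ties: {ties}  correct order: {correct}  flipped: {flipped}")
```

Under these made-up settings, roughly half the simulated respondents rate A and B identically even though A is truly preferred, which is exactly the tie-reading problem the paper warns about when relative conclusions are squeezed out of absolute ratings.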
New data, new results? How data sources and vintages affect the replicability of research | Published (Online): Apr. 19, 2023 | Research & Politics
Many social science papers employing economic data rely on sources providing longitudinal records of their variables of interest. However, as Iasmin Goes notes, many of these sources not only make estimates for the current study year6, they also revise previous years' estimates. For example, a nation's GDP growth may be reported as 1% in one vintage but 5% in another from the same core data provider. Goes uses three previously published studies to show that many published results in social science are contingent upon older data; the relationships do not hold up when the most up-to-date versions are employed.
This article is incredible; it's already changed my behavior when I write my own analyses (as can be seen in the GitHub repository for the Medium post I wrote last week about the rise of strong partisans in the US). I also love that there are many feasible reasons why the estimates change: There is, of course, the possibility that the specific numbers were not properly estimated the first time, but also: changes in reporting standards that shift which cases qualify between waves; political motivations to cast a predecessor as incompetent or crooked; or choices about which competing dataset to source a statistic from7. This paper shows that, when using data we didn't collect ourselves, we should explicitly note which year/official release the data come from.
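To see how this can bite, here is a minimal Python sketch with entirely made-up figures (the variable names and every number are hypothetical, not drawn from Goes's paper): the same ordinary least-squares slope computed against two invented vintages of a growth series. A relationship that looks strong under one vintage can all but vanish under the revised one.

```python
# Hypothetical GDP-growth estimates (%) for the same country-years,
# as reported in two different releases ("vintages") of one source.
vintage_2015 = [1.0, 2.1, 0.8, 3.0, 1.5]
vintage_2022 = [1.4, 1.2, 2.9, 2.2, 0.9]  # same years, revised figures
outcome      = [0.9, 2.0, 1.0, 2.8, 1.6]  # some dependent variable

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

b_old = slope(vintage_2015, outcome)
b_new = slope(vintage_2022, outcome)
print(f"slope using 2015 vintage: {b_old:.2f}")
print(f"slope using 2022 vintage: {b_new:.2f}")
```

This is why noting (and ideally archiving) the exact release you analyzed matters: someone re-running your code against the provider's current figures may get a different answer through no fault of the code.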
A couple nuggets that caught my eye:
A "survey" of "respondents" by the "Onion" found that Americans would respect Joe Biden more if he shot them. Upon hearing that Biden has no plans to shoot anyone, public opinion of Biden dropped to 7%.
About half (49%) of Americans would describe themselves as either "very" or "somewhat" artistic, according to a YouGov survey of 1,000 adult citizens. Younger adults (under 45) generally claim to be more artistic and tend to have higher confidence in their abilities across a variety of artistic pursuits than older adults—though few people claim to be good at many pursuits overall (e.g., 10% believe they'd be good at sculpting).
Echoing findings from this nugget in PotP 13, an NPR/PBS/Marist poll of approximately 1,300 respondents fielded April 13-17 found that 65% did not support a ban on medication-induced abortions—including 55% of Republicans. The survey also found broad bipartisan support for term limits on Supreme Court Justices.
That’s all for this week. Stay safe out there. See you next time.
Citation not needed.
8.5% at time of writing.
All bets are off as these capabilities improve, though. I, for one, welcome our robot overlords.
Effectively an IRT model assuming distinct subpopulations.
Or study window, more broadly—since many projects take multiple years for data collection.
Say one dataset didn't provide full coverage of the statistic: which supplement did you pick? There are often many competing alternatives.