How to Improve the Polling Industry
Interpretation of public opinion matters as much as methodology
I’ve participated in numerous post-election roundtables and debriefing sessions about the state of the American polling industry and public opinion analysis overall, a field I’ve worked in for more than 20 years in both an academic and practitioner capacity. With good reason, most of these discussions center on issues of survey methodology—primarily, whether and how traditional phone-based and emerging Internet-based polls are failing to adequately represent the people who actually turn out for elections.
There’s no question that the field badly underestimated Trump support in 2020, as in 2016, while also missing the mark entirely in several state-level elections, like the U.S. Senate contest in Maine. There, nearly every pre-election poll predicted Democratic challenger Sara Gideon would win narrowly; in reality, incumbent Republican Susan Collins was easily re-elected by 9 points. The consequences of bad polling are real, as these tools are used to make larger strategic plans and to determine where and how best to dedicate resources to increase the likelihood of success. If these surveys can’t adequately measure the opinions of representative samples of people in order to make wider claims about a population at large, nationally or in a state or congressional district, then they are essentially useless—pseudoscience dressed up in statistical language with no connection to underlying reality.
But since the best pollsters understand these limitations and don’t make overly deterministic claims about polling, many have set out to find and fix the problems with survey methodology. After 2016, many smart researchers looked into “weighting” polls by education to ensure that the views of particular groups, particularly those with a high school education or less, are properly represented in a poll. Things improved in 2020, but clearly weighting alone didn’t solve all the problems. Polling also rests on the assumption that people who take surveys and people who don’t hold broadly similar views. If this assumption is flawed—meaning, for example, that Trump-leaning people are systematically less inclined to take surveys than Biden-leaning ones—polls will have real problems.
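To make the weighting idea concrete, here is a minimal sketch of how reweighting by education works. All of the sample shares, population benchmarks, and support figures below are invented for illustration; they are not real survey data or the method of any particular pollster.

```python
# Illustrative sketch of post-stratification weighting by education.
# All numbers below are hypothetical, not real survey results.

sample = {"college": 0.60, "non_college": 0.40}      # share of poll respondents
population = {"college": 0.35, "non_college": 0.65}  # e.g., a census-style benchmark

# Each group's weight is its population share divided by its sample share,
# so over-represented groups count less and under-represented groups count more.
weights = {g: population[g] / sample[g] for g in sample}

# Suppose candidate support differs by education in the raw sample:
support = {"college": 0.55, "non_college": 0.45}

unweighted = sum(sample[g] * support[g] for g in sample)
weighted = sum(sample[g] * weights[g] * support[g] for g in sample)

print(f"unweighted support: {unweighted:.3f}")  # skewed by too many college grads
print(f"weighted support:   {weighted:.3f}")    # matches the population mix
```

The sketch shows why weighting matters: a sample with too many college graduates overstates support here by about 2.5 points. But, as the next paragraphs note, it also shows the limit of the fix—weights can only correct for traits you measure, like education, not for unmeasured ones like institutional trust.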
After 2020, data analyst David Shor looked into this exact issue, offering an intriguing explanation of why polls didn’t include representative numbers of Trump voters—mainly that Trump-leaning voters don’t trust institutions like the media or the non-profit organizations that sponsor most polls, and therefore don’t even participate in surveys. Shor argued that higher-trust Democrats were much more likely to take surveys during the pandemic lockdowns than were low-trust Republicans, thus skewing the samples of many polls. President Trump’s frequent admonitions against “fake news” and “fake polls” certainly fueled existing doubts among many of these voters. Weighting by education alone, however, does not solve the issue Shor raises: samples of non-college-educated whites that don’t include enough low-trust, Trump-leaning voters will remain flawed.
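A small simulation makes Shor’s differential non-response argument vivid. The response rates below are invented for illustration (they are not Shor’s estimates): even within a single demographic group that weighting treats as uniform, if Trump-leaning voters answer surveys a bit less often, the sample skews.

```python
# Sketch of differential non-response within one demographic group.
# Response rates and the 50-50 true split are assumptions for illustration.
import random

random.seed(0)

N = 100_000
TRUE_TRUMP_SHARE = 0.50                          # true split within the group
RESPONSE_RATE = {"trump": 0.05, "biden": 0.08}   # hypothetical response rates

respondents = []
for _ in range(N):
    leans_trump = random.random() < TRUE_TRUMP_SHARE
    rate = RESPONSE_RATE["trump" if leans_trump else "biden"]
    if random.random() < rate:                   # did this person take the survey?
        respondents.append(leans_trump)

observed = sum(respondents) / len(respondents)
print(f"true Trump share in group:   {TRUE_TRUMP_SHARE:.2f}")
print(f"Trump share among responders: {observed:.2f}")
```

With these made-up rates, a group that is actually split 50-50 looks closer to 60-40 for Biden among the people who pick up the phone—and no amount of education weighting can detect the problem, because both halves of the group look demographically identical.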
As analysts and industry leaders work on fixes and corrections to ensure solid samples and better surveys, it’s worth considering another source of problems in the polling world—namely, that too many polls released today are basically propaganda and don’t offer any real understanding of how voters process dense social and economic topics, or how they absorb competing understandings of what specific ideas and issues actually mean.
If you don’t get a representative sample of people in your poll, your poll will be bad. If you ask poor or misleading questions—or spin the narrow margins of the responses to these questions as showing “strong support” for your issue—your poll will also be bad.
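The “narrow margins spun as strong support” problem has a simple arithmetic core: in a typical-sized poll, a 52-48 split sits inside the margin of error. Here is a quick sketch of the standard 95 percent margin-of-error formula for a proportion, with a hypothetical poll size of 800.

```python
# Margin of error for a poll proportion (standard 95% formula).
# The 52% support figure and n=800 sample size are hypothetical.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 800
moe = margin_of_error(p, n)
print(f"{p:.0%} support, n={n}: +/- {moe:.1%}")
# The interval runs from roughly 48.5% to 55.5%, straddling 50%--
# consistent with majority opposition as well as majority support.
```

A result like this is better described as “divided, leaning slightly favorable” than as “strong support,” which is exactly the interpretive discipline the next sections argue for.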
Unfortunately, people spend less time on these problems of design and interpretation in polling than they do on the technical aspects of methodology.
Here are a few suggestions on how to improve the interpretation side of things:
Focus more on measuring basic attitudes and beliefs and less on messaging. Too many polls today overlook the basic need to understand the complexities of Americans’ opinions given the limited information available to most voters. Likewise, many issue polling projects favor momentary glimpses of people’s views rather than examining how people’s attitudes evolve over time in ways that paint a clearer picture of the overall context of opinion formation in politics.
Large chunks of many issue polls are reserved for “messaging”—mostly verbose, jargon-rich paragraphs that seek to “frame” an issue but often just confuse poll participants and produce slight-majority support that advocates then run with as if the public is fully behind them.
Instead, researchers should design more opinion projects that measure basic values and attitudes; assess relative priorities and partisan understandings of issues; and explore how these larger determinants of opinion influence people’s support or opposition for specific issues laid out in plain English—not abstract group-speak.
Use qualitative work to flesh out the meaning of poll numbers. Hard-core “quants” hate qualitative research. And it’s true that in-depth interviews, ethnographic diaries, focus groups, and other qualitative methods do not produce statistically representative measures of what subgroups of people actually think, and no researcher should make those claims.
But these are highly useful tools in finding out if people even have a clue of what most advocates are talking about. Supplementing quantitative surveys with this kind of research adds to understanding of voter attitudes and provides more depth than polling questions alone.
For example, I participated in a fascinating project a few years ago around the issue of a federal jobs guarantee, an idea that gained strength in progressive circles throughout the Trump presidency. Public polling in recent years has shown that roughly 65 to 70 percent of Americans support the idea of a federal jobs program. But when our research group set out across rural and urban areas in the South, and in old manufacturing areas of the upper Midwest, the response we got to the federal jobs guarantee idea was far from unanimous.
In general, people in the discussion groups we held on the issue (again not representative of all people in those groups) liked the idea of everyone having a job and supported an increased role for government in securing a job for people. Although I can’t say that X group thinks Y about the issue based on these discussions, I do have a better flavor for how voters in different contexts understand the idea of a federal jobs guarantee and what it would take for them to back such a proposal.
Regardless of location, the people we talked with had tons of questions about what a federal jobs guarantee idea meant in practice and how it would apply in their region. For example, people in the Delta wondered how they could even keep smart young people or professionals like teachers or doctors in their area without first expanding basic infrastructure—hospitals, community colleges, transportation, and basic utilities including sewage, water, and broadband. People in New Orleans, in turn, wondered how a jobs guarantee would address the issue of outside labor coming in and taking good paying union jobs in the aftermath of Hurricane Katrina. People in northeast Ohio said they had lots of job opportunities in their area but what they really needed were paid apprenticeships to get people into new lines of work and access to loans for small businesses to start up and grow.
By the end of the process, our research group’s understanding of a federal jobs guarantee shifted to the following construct which seemed to generate consensus as a good approach when explored in discussions:
In order to address the crisis of employment and the need for good paying jobs, it is time for the federal government to design and invest in a large-scale project to guarantee that every American who wants to work can find a job.
Like the Works Progress Administration (WPA) during the New Deal, which put millions of Americans to work fixing their communities and building new infrastructure, this new project would make sure that all Americans enjoy the dignity of work and have the chance to support themselves and their communities.
The program would seek to increase work in three ways:
· First, it would seek to connect out-of-work or underemployed people to existing jobs where possible.
· Second, it would pay out-of-work or underemployed people wages to get specific training to fill an existing or new job.
· Third, for those who may not fit into these other categories, like older workers or those who need only part-time work, the government will put people directly to work doing meaningful things in their communities like new construction projects or caregiving.
In addition to the job guarantee, this new program will increase the availability of small business loans for entrepreneurs and create new works councils that would bargain for better wages for workers across the entire sector of an industry and ensure that workers are treated fairly.
As the Biden administration considers ways to put people back to work post-COVID, the basic structure here—connect people to private sector jobs; offer paid apprenticeships; and reserve actual government-guaranteed jobs for certain classes of part-time or elderly workers—could be useful in designing a new jobs program that would actually meet people’s needs and desires, guaranteed or not.
Challenge ideological assumptions going into any public opinion project. On top of excessive messaging, many projects suffer at the outset from an unwillingness or inability of the work’s sponsors to consider that voters may think differently than they do about an issue, or may have a divergent interpretation of what the issue means and how it should be structured.
The point of good research is to identify political opportunities and challenges on an issue, not to reiterate views or build false self-confidence in ideas that an organization already holds.
This is apparent in a lot of good work on criminal justice reform. For example, research I’ve conducted on these reforms shows that concrete issues of discrimination—such as racial profiling of black men or excessive sentencing guidelines associated with the war on drugs—grounded in values like fairness and equal dignity for all people generate more concern and willingness to act among Americans than do abstract indictments of “systemic racism” across the entire criminal justice sphere.
Shifting public sentiments on BLM protests throughout the past summer also highlight the complexity of opinion about these issues that a purely social justice approach to them might miss. What started with widespread condemnation of police behavior and racial discrimination in the immediate aftermath of George Floyd’s death, along with massive public support for criminal justice reforms, rapidly dissolved over the summer as the decentralized movement shifted to more radical protests and indictments of structural racism, wider decriminalization, and issues like reparations or defunding the police that have little to no public support. If movements want to be successful, and pass laws rather than just engage in sloganeering, they need to accept and understand people as they exist, warts and all.
Not surprisingly, most of the early Biden moves on these issues now center on consensus ideas with more public backing built around police training, sentencing fairness, eliminating discrimination, offering second chances, and reducing violence.
Well-designed public opinion analysis is a critical tool for effective politics and issue movements. With major public education efforts coming up on the stimulus and “Build Back Better” agenda, liberals would be wise to both fix their survey methodologies and improve their interpretations of public opinion to get a fair and accurate read of what exactly the public likes and dislikes about their agenda.