Making Sense of Public Opinion Polls: An Extended Interview with Dr. Dahlia Scheindlin

How should we read and make sense of public opinion polls about American Jews, the general American public, and unfolding events in Israel?

The interview has been edited for coherence and length. An excerpt of this interview ran in the January 25, 2024 issue of the CASJE Research Digest.

Dahlia Scheindlin is a fellow at Century International, based in Tel Aviv. She is a public opinion expert and an international political and strategic consultant. Her book, The Crooked Timber of Democracy in Israel: Promise Unfulfilled, was published in December 2023. Dr. Arielle Levites, CASJE Managing Director, spoke with Dahlia Scheindlin about how she reads public opinion polls and what features she looks for in a study to make sense of the findings.

Welcome, Dahlia. Your work is featured all over the place these days: The New Yorker, Haaretz, The Guardian. You're very busy, so we really appreciate you taking a little time to speak with us.

Dahlia Scheindlin: One of the reasons I really wanted to spend the time is that I really like the topic.

I appreciate that! CASJE recently launched a new Research Digest where we offer analytic summaries of research about American Jews since October 7th. And the earliest data, the earliest material we have had the opportunity to feature, is largely public opinion polls. So we invited you to help us and our readers make sense of what we're seeing.

Dahlia Scheindlin: Good. I wish everybody would do this. I mean, I love to talk about how to read and how not to read surveys.

Perfect. Those are exactly my questions for you today. It says on your bio that you are a public opinion expert. What does that mean?

Dahlia Scheindlin: Since 1999, I have been conducting surveys and focus group research and putting the results together into a hopefully coherent analysis that helps drive people's decision-making.

The decision-makers could be politicians, political campaigns. They could be NGOs trying to generate social change. The work could be for public use. It could be for the media to understand public opinion dynamics over certain issues. So my work breaks down into a few different types.

The other division in my work is between studies that are intended for internal decision-making for whoever the client is, and public research. I do much more of the former; much more of my work is commissioned for internal decision-making. Sometimes the organizations or the clients decide to make it public, but it's quite different from most of the media polling that you see, which is all intended for public use.

A lot of what I do publicly these days is analyze other people's research. So that's what I've done in the articles that you've been reading.

What might be special features about how we study and make sense of public opinion? It sounds like a survey is a tool that you use a lot. Is there something special about surveys as tools for generating insight into what public opinion is?

Dahlia Scheindlin: [Surveys are] the only systematic way we have of generalizing at a quantitative level. So everybody can have an observation, everybody can have anecdotes, everybody can try to follow trends and piece together evidence for trends. But I think that very often, whether in Israel or any other society, when we talk about how society feels, we are basing it on some sort of anecdotal observation, including reading the media, including our conversations with our colleagues, friends, family, whatever.

If you're a journalist, oftentimes you might pick up conversations with people on the street or elites and policymakers.

But surveys provide something that is not really available anywhere else, which is a quantitative figure that is, hopefully, a representative generalization about society, so you can say with some actual grounding [in] evidence that “this percentage of society feels this way.” And I think that by contrast to other kinds of observations, when you say, “Oh, Israelis think this,” or “Americans think that,” the implication of statements like that is that a majority of people think like that. But you don't really have any evidence for that if it's based on anecdotal information, even if that comes from elites.

And certainly if it comes from conversations with people that you know, because the people we know are, by definition, in our circles; they're not representative. So surveys are intended to provide a representative sample, and to provide evidence for whether there's a majority or a minority or a plurality, and for trends over time, grounded in evidence.

Can you share a little about the professional norms and ethics that govern this work?

Dahlia Scheindlin: Absolutely. There's good survey research and there's bad survey research. The thing is, there's no one overall international body that says what's good and bad research. Of course there are professional guilds and there are associations of market research companies. And there are well-developed, well-established standards in academia and in commercial research.

Anybody who takes the work seriously knows that there is a strong statistical basis for how you understand sampling methods. You have to try to build a sample to the best of your ability to represent the actual universe you're trying to examine. To do that, you have to know as much as possible about the demographic and geographic breakdown, and sometimes the political breakdown, of the universe that you're looking at.

I find it's much easier to do survey research with a representative sample of any society when you have data publicly available on basic things like the gender breakdown, the regional breakdown, the age breakdown, the educational attainment breakdown, and political leanings. We always have, at least in democratic countries, a measure of political leanings in the actual public, because we have election results and we also know turnout data. So you can get a sample that mostly reflects the general population, whatever country it is.

And where your sample isn't a hundred percent representative, there are statistical techniques you can use to adjust the sample just a little bit. It's called weighting. So we weight to make sure the sample is reflective of whatever the key characteristics are. They may be demographic, they may be political, they may be geographic, etc.
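To make the weighting idea concrete, here is a minimal sketch in Python of one-variable post-stratification weighting. The sample counts and population targets are invented purely for illustration; real polls typically weight on several characteristics at once (often by raking), not just one.

```python
# Minimal sketch of one-variable post-stratification weighting.
# Sample shares and population targets are hypothetical.
from collections import Counter

# Each respondent is tagged with a demographic category (here: age bracket).
respondents = (
    ["18-34"] * 180 + ["35-54"] * 420 + ["55+"] * 400  # n = 1,000; skews older
)

# Known population targets, e.g. from a census or turnout data.
targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

n = len(respondents)
sample_share = {k: v / n for k, v in Counter(respondents).items()}

# Weight = target share / sample share, so over-represented groups count less.
weights = {k: targets[k] / sample_share[k] for k in targets}

for group, w in weights.items():
    print(f"{group}: sample {sample_share[group]:.2f}, "
          f"target {targets[group]:.2f}, weight {w:.2f}")
```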

It gets harder when you're looking at subpopulations. So for example, a study of the Jewish American population is absolutely valid, but there can be a little bit of confusion where people think that the bigger the sample, the better it must be.

Let’s dig in a little there, in terms of the challenges of representative samples. 

Dahlia Scheindlin: What you really need to look at is [not how large the sample is but] whether the sample truly reflects the key demographic and other meaningful characteristics of the Jewish American population, or whatever population it is. And so for that, you have to actually have credible data showing the same basic breakdowns that I mentioned earlier, whether it's demographic breakdowns, age structure, geographic spread, educational attainment, political leanings, et cetera. And oftentimes the only way we know those things is through other surveys.

It's better to compare to real-world results. The easiest example is probably elections. If you draw a sample, you want to make sure that you have roughly the right political breakdown; otherwise your sample isn't going to be representative at the political level. And instead of comparing that to other surveys, you'd want to compare it to election results, which are real, meaningful results, rather than just another representative sample. So those are the kinds of things we look for in good survey research.

Any other characteristics you look for when assessing survey research?

Dahlia Scheindlin: Sampling is one of the most important, but there are other techniques that make a survey credible and ethical in terms of research. And I think maybe the second major one is question design and questionnaire design.

Lots of people understand the concept of a push poll, which is not really about examining people's attitudes, but changing their attitudes through questions that are disguised as examining and testing attitudes.

Now, I would distinguish between push polls and strategic polls. Because on campaigns, a key instrument that we have for building campaign messages is the strategic poll. And this doesn't necessarily mean it has to be party politics; it could be any social group that wants to effect some sort of political change. So we're trying to look at how to change minds. That's part of the aim of the survey.

So [in the case of strategic polls], towards the end of a survey, you can test or develop questions that are intended to change minds. And you can be open about that and tell respondents, “Okay, now we're going to give you statements that are intended to change your mind this way or that way.” So it's transparent. And that's a more legitimate way of doing it: it's much more transparent, and it's also methodologically more sound. The results will be more meaningful.

You mentioned that you're doing a lot of work right now in the public arena in terms of reading and interpreting other people's work. So when you are presented with a report on a public opinion study or survey, can you tell us a little bit about how you read it? What questions are you trying to answer for yourself? What supporting material are you looking for in the reporting?

Dahlia Scheindlin: Well, the very first thing I want is the original material. So when I read a news story or any sort of write-up, like a summary of a poll, I pretty much don't even pay attention to it unless I can get access at least to the full questions, and, ideally, the full results. So I prefer to have the actual data myself to look at and see what I see in the numbers.

A media article is designed to tell a story, which is legitimate. That's how people want to consume the news. But as a pollster, I don't want the story before I've read the data. I don't want somebody else's story. I don't really want somebody else's analysis.

I'd rather read the full question wording. That helps me decide if it was a legitimate, professionally designed survey that was not intended to be manipulative. I'd rather look at that myself before my own thinking about it is biased by somebody else's interpretation. The first thing I'll do is look at the questions.

But the other reason why I go for the raw data is a matter of transparency. When I see a report, like a media story or some think tank that's reporting on a survey, the first thing I want to know is if it's transparent. Because if they are presenting an analysis, but they're not giving you the opportunity to find the data yourself… I think it's not transparent. And if it's not transparent, I tell people to pretty much chuck it in the garbage.

Now, sometimes I understand that people don't have the wherewithal, or for whatever reason, they haven't immediately made [data] available or linked to full data.

If you're doing a survey of a special population, I want to know: how do you know the real breakdown of that population, and how well are you able to replicate it in your sample?

Could you give us a little insight into how you read across multiple studies to get a picture?

Dahlia Scheindlin: Yeah, it's hard. I have guidelines, but I wouldn't say there is necessarily one set of rules on this.

What I like to do is take questions, if I see there are multiple studies and they're asking similar questions, even if they word them a little bit differently. In some ways, it's axiomatic that you're not supposed to compare surveys that use different question wording, even if it's the same topic. But in a way, there's an advantage to that. A classic example of this is from the Israeli-Palestinian context. No matter how you ask a question about the two-state solution, the breakdown of the population is pretty much the same. Which is telling. It means that people have a very fixed understanding, and they've really thought it through. So when they hear the words “two-state solution,” they know exactly which side they're on, even if you play with how the question is worded.

By contrast, if every different formulation of a question makes the response breakdown wildly different, then you say, well, this is a very malleable issue. Maybe people haven't really thought it through, maybe they haven't understood the issue, maybe the way it's presented can shed completely different light on whatever the topic is. And so that's where you have to be more careful, and you should not really be comparing surveys as if they mean the same thing. You can compare them, but only to point out that a different question wording will lead to a very different response.
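One way to make this wording-sensitivity check concrete: given response breakdowns for two formulations of the same question, a chi-square test of independence (here via scipy) indicates whether the distributions differ by more than chance would explain. The counts below are invented, and Scheindlin is describing a judgment call, not this specific test; it is just one plausible way to formalize it.

```python
# Hypothetical check of whether two question wordings yield different
# response breakdowns. All counts are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: two wordings of the same question; columns: support / oppose / unsure.
wording_a = [520, 380, 100]
wording_b = [350, 510, 140]

chi2, p_value, dof, expected = chi2_contingency([wording_a, wording_b])
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Breakdowns differ by wording: treat the issue as malleable, and "
          "don't compare these surveys as if they measure the same thing.")
else:
    print("Breakdowns are similar across wordings: attitudes look settled.")
```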

I think one thing that I pay attention to, and that I feel doesn't get enough play, is just what questions are being asked. And so when I look at multiple surveys, I want to know: are they all asking the same questions? If so, that's a pretty good indication of what this society is talking about. Or are they asking a whole different range of questions? And that can be really insightful, especially over time. If you look at how questions were asked 20 or 30 years ago on an issue, and the wording has changed, it can be very indicative of how language has changed, how our understandings have changed, and what was acceptable to talk about in the past versus now.

So I like to look at the questions. Just the questions being asked are as revealing as the actual breakdowns. And so I look for that also when I'm looking at multiple studies.

At CASJE we specifically focus on studies of the American Jewish population, which, as you noted earlier, is a small population, and a population that doesn't have the benefit of a census, or as many studies, or as many cuts into its attitudes, as the much larger general population. So how much can we learn from one study or a couple of studies when we don't have the benefit of many studies to help us form that fuller picture?

Dahlia Scheindlin: I've seen a number of American Jewish studies over the years; they're not that uncommon. Usually there's a little extra wave ahead of elections and after elections. There have been studies conducted by Jewish federations, studies conducted by the ADL, studies conducted by [political groups], and some of these are conducted as part of, or together with, a representative sample of the American public.

Some of them will be attempting to represent the Jewish population of America. And some of them will be conducted side by side with a full sample so that you can actually compare.

There are quite a number of American Jewish surveys out there over the years. And again, the challenge is comparing trends over time. You do have to be careful about what you're comparing.

For example, if you're looking at American Jews over time and you ask, “Do you consider yourself to be a Democrat or Republican? Did you vote for a Democrat or Republican? Do you approve or disapprove of a Democratic or Republican presidential candidate?” You'll pretty much get a similar breakdown. For the most part, you'll get about 70% of Jews who say they support the Democratic side of the question, and maybe 25 to 30% who support the Republican side. And that's pretty stable, no matter how the question is asked.

But when it comes to more sensitive issues like how we measure, I don't know, agreement with Israel's policies or things that are very nuanced, I would be careful of looking at longitudinal [findings]. I wouldn't look at the survey from 10 years ago about the Jewish population with a question that's worded about Israel in a certain way, and compare it to a question about Israel worded in a slightly different way today. Because you won't know if it's the question wording or the time, the phase in history, that's changed people's minds.

So if you were fielding a study of American Jews right now, what questions would you ask?

Dahlia Scheindlin: I would ask the extent to which they follow news from Israel and Palestine. I would ask to what extent they expect that to influence their vote for US politics. And then I would want to take those people who said they are following closely, and I would really like to ask them what they're following.

Are they following only Israel's perspective? Do they have any perspective on the Palestinian experience? And that's important because I think what American Jews, by and large, maybe aren't really internalizing, is that every single thing that happens between the river and the sea, and I mean that in the geographic sense, not the political sense, involves Israelis and Palestinians alike. What happens to Israelis has a profound impact on Palestinians, and what happens among Palestinians has a profound impact on Israelis. And therefore it's very hard for me to analyze this region without looking at both populations. And I think that especially Americans planning on taking this issue into account when making major decisions about American politics should have as broad information as possible about all the things that really affect our lives here. And I suspect that, for the most part, American Jews are much more attuned to Israel's side of the matter, which is fine, but it's partial.

Any last thoughts for people who want to be responsible readers of polls and public opinion research?

Dahlia Scheindlin: What I really think is that people need to look at polling critically. In other words, if you see one piece of data that is striking, interesting, or shocking, I would never take it at face value. Ideally, you should have it confirmed by other data, even if that data is only roughly along the same lines.

There's a tweet going around about how a majority of young Americans agree with the statement that Jews are an oppressor class and too powerful in the world. And just this one piece of data, this one data point, went all around the internet, and nobody bothered to read the entire study, which was easily available at the touch of a button. Nobody bothered to look at it in the context of other data, other polls.

One finding could always be wrong. And so I would very much recommend that people never base their overall impression on one data point, just like you wouldn't want to base a real impression on a single quote about how a politician thinks.
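A worked illustration of why a single data point is fragile: even before question-wording and design problems, a lone poll estimate carries sampling error. This sketch computes the textbook 95% margin of error for a proportion; the figures are invented, and the formula assumes a simple random sample, which real polls rarely are, so the true uncertainty is usually wider.

```python
# Back-of-envelope 95% margin of error for a single poll finding.
# The reported share and sample size below are hypothetical.
import math

p = 0.52   # reported share agreeing with some statement
n = 800    # sample size

# Ignores design effects and weighting, which typically widen the interval.
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} ± {moe:.1%}  ->  roughly {p - moe:.0%} to {p + moe:.0%}")
```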

And ideally, only really trust a public opinion observation if you can access the full data yourself, because so often, media stories are written with somebody else's analysis in mind. And even worse: headline writers. Headline writers are the most guilty parties of misinterpreting polls, because they need a sensational headline. So I would just think of yourself as a critical consumer, not only when consuming the news, but when consuming polls in general. Just knowing how to read an article or read a number doesn't mean you've actually internalized the full context of what it means.