One of the videos in the “Response to Resistance” survey that the Memphis Police Department has asked people to fill out shows a dramatized encounter between an officer and a man.

An online survey asking Memphians their opinions on the use of force by police will produce results of dubious value.

Memphis Police Director Michael Rallings has said he hoped 100,000 residents would take the “Response to Resistance” survey “so we can have real data on how you feel.” The deadline for responding to the survey is tomorrow.

But because the survey relies on a non-random sample and the questions are written ambiguously, the Memphis community and the Memphis Police Department should treat any conclusions drawn with a high degree of skepticism.

The U.S. Supreme Court ruled in 1989 that in court a police officer’s use of force should be judged on whether a reasonable person could objectively conclude that the officer’s actions were reasonable in the context of the situation. But can the results of this survey offer the “real data” Rallings calls for to define what is objectively reasonable use of force to the community?

The survey was designed by Sam Faulkner, a former police officer with 30 years of experience who runs a training program and provides legal advice to law enforcement agencies, according to his website. He also has testified in more than 350 lawsuits “in defense of police and correctional officers and their respective organizations,” the website says.

He says he created the survey to try to develop national guidelines for how law enforcement officers should respond when individuals don’t comply with verbal commands, become aggressive or physically resist. His website says responses to the survey from law enforcement officers and civilians closely align.

Two factors help polling experts and social scientists determine whether a survey offers “real” or sound data from which to draw conclusions. One is how the respondents are chosen, generally called sampling.  The other is whether the survey’s questions are written to avoid ambiguity and bias as much as possible.

Is the sampling in the ‘Response to Resistance’ survey ideal? No.

The gold standard is random sampling, contacting likely respondents at random, usually by phone. When a random sample is large enough, typically at least 400, we can make inferences about the overall population based on the sample.

Of course, no sample is perfect, which is why there’s always a margin of error. But, if a survey is to represent the feelings of the broader community with some degree of accuracy, it should be based on a random sample.
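To see where the rough threshold of 400 respondents comes from, here is a short sketch using the standard formula for the margin of error of a proportion at 95% confidence. This is generic polling arithmetic, not anything drawn from the survey itself:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a
    simple random sample of size n (p=0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of 400 yields a margin of error of about
# plus or minus 5 percentage points.
print(round(margin_of_error(400) * 100, 1))  # -> 4.9
```

Note that this calculation assumes a random sample; for a self-selected online sample, no sample size makes the margin-of-error math apply.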

This survey doesn’t rely on a random sample — respondents are whoever chooses to go to the website and complete the survey.

Even if 100,000 Memphians complete the survey, the results may or may not reflect the opinions of the population. Even such a large sample may be biased in a variety of ways — for example, toward those who follow the MPD on social media, who read traditional news media, or who are more worried about crime.

It’s worth noting that the MPD has used this survey before — for officers in 2016 and for residents in 2017. About 3,000 citizens responded to the survey, according to a slide presentation on the Response to Resistance website.

“Unfortunately, in 2017, very few members of our community participated,” Rallings said in September. “We want to know what the public perceives to be reasonable force used by an officer.”

Is the survey well-constructed or well-written? No.

To begin, the survey instructions concern only situations in which an officer has informed a person “that he is under arrest.” The instructions also ask the respondent to “keep in mind” when answering the questions that the arrest is “legal and lawful” and that “the subject is intentionally resisting the officer’s attempts at control.”

In other words, these instructions introduce a pro-law enforcement bias, asking respondents to assume that the officer’s actions are and have been appropriate and the subject’s actions have not.

There’s no way for respondents to offer an opinion regarding use of force where the situation is less clear cut.

The bulk of the survey is divided into five “segments,” each with a video demonstrating two or more different law enforcement responses to one or more types of “resistance.” For each segment, respondents are first asked to offer a “yes” or “no” as to whether they “agree that it is reasonable for the officer to use the techniques shown in the video.”

Respondents are then asked separately for each law enforcement tactic the extent to which they agree or disagree that it is reasonable to use that tactic in response to the one or more resistance tactics in that segment.

Because of the way these questions are written, respondents are asked to answer multiple questions within a single question. In other words, these are (at least) “double-barreled questions”—a well-recognized problem in survey design. 

A basic example: “How satisfied were you with the speed and quality of the service?”

This actually asks two separate questions—about speed and quality. How do you answer if the service was good but slow? And how does the business owner interpret that answer? Does a “very satisfied” answer mean the person was happy with the speed of service or the quality?

A well-designed survey would ask two distinct questions to eliminate guesswork on the part of both respondent and the person who paid for the survey.

Similarly, the Response to Resistance survey asks respondents to provide, for example, a single “yes” or “no” to whether it’s acceptable to use pressure point pain or joint manipulation if a subject is refusing to move, or to use a strike to muscle mass or takedown tactics if a subject is pulling away from the officer. That’s at least five different questions rolled into one.

The next question asks for a separate rating on a five-point scale for each law enforcement tactic. But these finer-grained questions still require a single response covering both a subject refusing to move and a subject pulling away — still double-barreled questions.

It seems likely that opinions about what is acceptable differ depending on whether the subject is refusing to move or is pulling away. This survey doesn’t allow for making any such distinctions.

Rallings’ desire to survey Memphians about their opinions of law enforcement use of force is admirable. However, not all surveys produce the kind of “real data” Rallings says he’s looking for.

Craig Stewart is an associate professor of communication in the Department of Communication & Film at the University of Memphis. His research focuses on science communication, discourse analysis, and social identity. 

This story is brought to you by MLK50: Justice Through Journalism, a nonprofit newsroom focused on poverty, power and policy in Memphis. Support independent journalism by making a tax-deductible donation today. MLK50 is also supported by these generous donors.
