
2017 Milwaukee Police Satisfaction Survey – Frequently Asked Questions

The Milwaukee Fire and Police Commission (FPC) is pleased to see so much interest in the recently released police satisfaction survey report. We hope that interested residents will take the time to read the full police satisfaction survey report, or at least browse the report overview that we have produced. If community members have questions about the survey that are not answered here, please contact our office for assistance at fpc@milwaukee.gov or 414-286-5000.


Why was this survey performed?
The FPC had the initial round of this survey performed in 2014 to provide an additional accountability measure for our city’s largest department. With the urging and support of then Common Council President Michael Murphy and Mayor Tom Barrett, the FPC contracted with an independent survey research institution to measure city residents’ satisfaction with the Milwaukee Police Department (MPD) along a variety of dimensions. The goal was to have a measure of the police department’s performance beyond crime statistics: one that could capture what our community thought of the work of this important agency.
 
The intent of the 2014 survey was that it would be replicated in future years so that our city could judge improvements or declines in satisfaction and use that information to inform policy decisions. The 2017 survey is the first such replication of this tool, and if the city continues to find value in this work we hope to repeat it in the years ahead.

How can I trust that the survey report is truthful and not a political tool?

The FPC first conducted this survey in 2014, and in both that instance and in 2017 the FPC contracted with respected, professional, and experienced public institutions to perform the work. These institutions employ professional scientists who use evidence-based survey techniques and peer-reviewed statistical analysis to summarize and provide insight into the general opinion of City of Milwaukee residents.

An average measure of the entire population is not meant to replace the truth and meaning in anyone’s personal experience; it is meant to stand alongside the truths of our daily lives and provide perspective. When a majority of residents feel a certain way about a measure, that does not discount the remaining residents who feel the opposite. For instance, the fact that the survey showed 73% overall satisfaction with the MPD does not mean improvements to the MPD are not needed: 27% of our city clearly do not share that perception, and this work helps us understand why.

Furthermore, there are significant racial disparities highlighted in the survey that require action and accountability. In fact, the survey illustrates that:

  • While city-wide satisfaction is 73% overall, satisfaction among African-American respondents is only 60%, meaning 40% are dissatisfied.
  • When measuring perception of compassion shown, 76% of white respondents were satisfied while only 45% of black respondents were satisfied.
    • Looking closer at that data shows that while 43% of white respondents were “very satisfied”, 42% of black respondents were “not at all satisfied”. These are mirror images at the two extremes of the satisfaction scale, indicating that white and black residents report nearly opposite perceptions of the compassion shown.
  • When measuring perceived legitimacy of police initiated contacts, white residents have widely different perceptions compared to black and other non-white residents. Over half of our city’s black residents do not believe they are being stopped by the police for legitimate reasons, while only 15% of white residents feel that same way.

Neither the content, the analysis, nor the timing of the report release was controlled by the FPC or MPD. While surveys are not and cannot be perfect measures, they are the best tool available for gauging public opinion; there is a long history of research that supports their value and details their shortcomings.

How can the approximately 1,300 people surveyed really describe a city of over 600,000?

Because it is impractical to question every person in the city, we must rely on a random sample of the population if we wish to measure the satisfaction of city residents. An analogy can illustrate how this works. Say we have a jar with 600,000 jellybeans in it, some red and some blue. If you pick out only ten jellybeans, you cannot have much confidence that those ten represent the entire jar. As you pick out more and more, you become more confident that the proportion of red and blue in your handful accurately represents the entire jar.

Statistics provides equations that define how much confidence you can have in your estimate depending on how many jellybeans you pick out. You never reach 100% confidence (0% error) until you pick out every jellybean, but you can get down to a low error by picking out what is, to some, a surprisingly small number of jellybeans. For a jar the size of Milwaukee (about 600,000), the math shows that you would have to pick out about 1,000 jellybeans to have a margin of error of about 3%.
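To make that concrete, here is a minimal sketch of the textbook margin-of-error calculation for a simple random sample. The 95% confidence level, the 50/50 split, and the variable names are illustrative choices for this example, not figures taken from the report.

```python
# Minimal sketch of the standard margin-of-error formula for a proportion,
# assuming a simple random sample and a 95% confidence level.
import math

N = 600_000   # approximate size of the "jar" (Milwaukee's population)
n = 1_000     # jellybeans picked out (completed interviews)
p = 0.5       # assumed 50/50 split, which gives the widest possible error
z = 1.96      # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)   # about 0.031, i.e. roughly 3%
fpc = math.sqrt((N - n) / (N - 1))     # finite-population correction, ~0.999 here
print(f"margin of error: +/-{moe * fpc * 100:.1f}%")
```

Note that the jar size barely matters: because 1,000 is such a small fraction of 600,000, the finite-population correction changes the answer by a negligible amount.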

The long history of evidence-based research on sampling also shows that if some of those jellybeans slip out of your hands while you pick them and you can’t count them, it does not affect your final estimate, so long as you are picking randomly and your final number of jellybeans is sufficiently high. That research shows that even if some people hang up on you, as long as you keep randomly picking until you hit your target number of interviews, you will have a valid estimate within the mathematical margin of error.
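As a rough illustration of that point, and assuming the “slipped” jellybeans are dropped at random (which is the key assumption), a simple simulation shows that random nonresponse does not pull the estimate away from the true proportion. The numbers below are invented for the example and are not taken from the survey.

```python
# Illustrative simulation only, not the survey's methodology: 73% of the
# population is "satisfied"; 40% of draws are discarded at random to mimic
# hang-ups, and drawing continues until 1,000 responses are kept.
import random

random.seed(1)
estimates = []
for _ in range(200):
    kept = []
    while len(kept) < 1_000:
        person = 1 if random.random() < 0.73 else 0   # true satisfaction rate
        if random.random() < 0.40:                    # simulated random hang-up
            continue
        kept.append(person)
    estimates.append(sum(kept) / len(kept))

print(min(estimates), max(estimates))  # estimates cluster near 0.73, within a few points
```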

Does the report actually represent Milwaukee’s demographics?

Refer to page 48 of the report for the age and race breakdown of the respondents. It shows that the respondents were reasonably representative by race but skewed older. The report’s authors used widely accepted, industry-standard methodologies to weight the responses of over- and underrepresented populations to ensure that the reported results were representative.
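As a rough sketch of how that kind of weighting works in general (the age groups and shares below are made up for illustration and are not the report’s actual figures), each respondent’s answer is counted in proportion to how under- or overrepresented their group was among those who completed the survey.

```python
# Hypothetical post-stratification weights: the sample and population shares
# below are invented numbers, used only to show the mechanics.
sample_share = {"18-34": 0.15, "35-64": 0.45, "65+": 0.40}   # who completed the survey
census_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}   # who lives in the city

weights = {group: census_share[group] / sample_share[group] for group in sample_share}
print(weights)  # {'18-34': 2.0, '35-64': 1.11..., '65+': 0.5}
```

In this made-up example a younger respondent would count roughly twice as much as average and an older respondent half as much, so the weighted totals match the city’s known demographics.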

Does the survey break down the results by neighborhood?

The full report provides analysis of a number of variables broken down by police district on pages 41-46. Because the phone numbers called were selected randomly, every person in every neighborhood of the city had an equal chance of being included in the sample. The variables highlighted in that section of the report were selected by the report authors as the most relevant. Neither the FPC nor MPD told the authors which variables to analyze in the district-to-district comparisons.

The police district was chosen as the unit of geography in order to strike a balance between geographic detail and the cost of the survey to taxpayers. If the researchers had attempted to obtain meaningful data at smaller geographic units (such as zip codes, neighborhoods, or aldermanic districts), the number of respondents, and with it the cost of the project, would have had to increase considerably.