By Kim Harris
The years 1981 and 1997 don’t have a lot in common. Both years followed presidential elections, but 1981 inaugurated a new president while 1997 welcomed back the leader from the previous term. Both years also saw advances in space exploration: 1981 witnessed the first space shuttle flight, and the Mars Pathfinder touched down in 1997. However, it seems the differences between those two years far outweigh the similarities.
The cultural environment across sports, entertainment, pop culture, economics, and politics shifted significantly during that 16-year span. One thing that didn’t change in that time, though, was the most common baby boy name. Michael held the top spot on the boys’ naming charts in both years. Jennifer held the title for girls in ’81 before sinking in popularity; by ’97, Emily had climbed to the top.
In 2011, 14 years after Emily’s triumph, the blockbuster movie Moneyball chronicled the unlikely rise of the Oakland Athletics in their 2002 season, despite their comparatively modest budget. The movie tells the relatively true story of how general manager Billy Beane used expert statistical analysis to string together a team of unknowns who would go on to a record 20-game winning streak and capture the American League West Division title.
What do the most popular names from 1981 and 1997, and the 2002 A’s have in common? Research.
Every year the Social Security Administration compiles the names given at all live births in the previous 12 months to publish its baby name charts. At the same time, sports statisticians pore over pitching, batting, win percentages, errors, injuries, and countless other indicators to make predictions and recommendations for the upcoming season. Both analyses, baby names and baseball, are met with widespread anticipation and review by a significant portion of the American public.
The Rise of Research and Big Data
Americans love solid, reliable formulas and information. The more available research and big data become to the public, the hungrier we all become for stats and figures to provide insight about the changing world around us. From 2008 to 2019, the market research industry more than doubled in value, growing to more than $73.4 billion in annual revenue. North America accounts for 54 percent of that revenue, followed by Europe at 26 percent, according to www.statista.com.
Accumulation of information offers the illusion of more control. In business, if we better understand the market and consumer conduct, then we can sell more. In church leadership, if we can better understand the lost, our congregants, and trends in public attitudes and behavior, then our ministry can be more effective.
There is no doubt that thorough, conscientious research provides valuable clues and insight about the landscape and context in which we do ministry. Is there a point, though, where our reliance on and interpretation of this kind of information becomes irresponsible?
The Danger of Faulty Deductions
In his 2010 book Proofiness, Charles Seife argued that the mountain of research churned out each year demands more sophisticated and nuanced interpretation. The mathematician and journalist detailed a long history of numbers being used to convince, convict, and convert people to ways of thinking on the strength of a few figures alone. After all, numbers don’t lie, right? At the beginning of his book, Seife issued this challenge to readers: “If you want to get people to believe something really, really stupid, just stick a number on it.” Of course, most of the time statistics are not used to deliberately mislead. But we must be cautious when evaluating statistics, their conclusions, and the sources they come from.
When collected with integrity, the numbers in and of themselves remain mostly neutral. The problems arise in the deductions we make from those numbers. One of the most common is assuming that correlation implies causation. For example, all those Jennifers born in 1981 came of age in 1999. That same year, Gallup reported that church membership stood at 70 percent of U.S. households, a number that had stayed essentially flat since the organization started measuring it in 1940. But by 2015, when all the Emilys born in 1997 came of age, church membership had dropped to 55 percent of households. Emily held the top of the naming charts until 2004, and as more Emilys came of age from 2016 to 2020, we saw the churched population drop to an all-time low of 47 percent.
With the clear negative correlation presented in this reputable, compelling data, the solution to bringing Americans back to church is simple: stop naming your daughters Emily.
I’m kidding, of course.
This silly, hyperbolic example demonstrates the risk of inappropriately interpreting information. The Social Security Administration and Gallup are generally viewed as well respected, reliable sources. The numbers presented in their findings are true, but these conclusions are extremely misleading. Numbers may not lie, but conclusions can. When we interact with data, we must guard against drawing our own conclusions or believing other conclusions based simply on correlation.
The Pitfalls of Confirmation and Rejection Bias
Another common two-part pitfall in data review is confirmation and rejection bias. As church leaders, we can responsibly engage with research to better understand the people to whom we’re ministering and preaching. Research groups like Barna, Gallup, and Pew offer valuable tools that can help inform that understanding. However, we must learn to use those tools wisely.
Responsible engagement with these surveys and poll results looks like responsible engagement with anything. We might read a whole page of Amazon reviews before buying a new air fryer. We might look at state test results in various school districts when anticipating a family move. Most importantly, we always look for the author’s intended meaning, historical context, and the anticipated audience when interpreting Scripture. In a similar way, we should diligently evaluate the overall context, questions, and conclusions offered by third-party research groups before blindly adopting their findings.
For example, in 2019, Barna president David Kinnaman published research in his book Faith for Exiles: 5 Ways for a New Generation to Follow Jesus in a Digital Babylon. The book focuses on evangelizing and discipling young adults who grew up in church and now fall somewhere on the spectrum of unbelievers to committed disciples. From his research, Kinnaman concluded that personal experience and encounter with Jesus are central to developing a deep, lifelong faith.
As committed believers, we are likely inclined to accept these findings because they are consistent with our own lived experiences. That’s confirmation bias. At the same time, we’re also likely to reject findings that run counter to our own experiences. That’s rejection bias. It’s important to recognize these biases in ourselves as well as in the research we consume. Kinnaman’s survey evaluated how respondents answered questions about Jesus in their lives and their interaction with church, the Bible, and prayer. It could be argued, however, that because Kinnaman is a Christian for whom personal interaction with Jesus is key, he could fall victim to a similar confirmation bias when writing those questions.
Ultimately, as leaders and pastors, we cannot control what research makes headlines, but we can control how we respond to it. Our responsibility lies in being mature consumers of all information by asking good questions, noting our biases, and seeking to understand sources. Our job is not to be savvy statisticians, but softhearted shepherds. Data should inform our ministry, but not direct it . . . the Holy Spirit and Scripture can do that.
Kim Harris directs advertising and marketing with Christian Standard Media and is communications director at The Crossing, a multisite church located in three states across the Midwest.