Search engines play one of the most significant roles in our technologically enabled lives by shaping how we conceptualize and interact with information, knowledge, wisdom, and arguably reality itself. They are our externalized reasoning machines, both facilitating our access to knowledge and quickly becoming our knowledge. They are where we go to research, clarify, and definitively answer our queries, which go on to form the substance of our opinions, views, and beliefs.
From the explicit knowledge acquired from the lost art of slow, considered research to the implicit knowledge lost in imagining what we don’t yet know, search engines are surreptitiously eroding the richness and diversity of our knowledge and lives. We reveal our deepest inner thoughts, fears, and desires to search-engine technologies, replacing the intimate human services otherwise offered by teachers, doctors, librarians, friends, confidants, psychiatrists, religious representatives, and respected elders.
This problem is underscored by worrying reports documenting UK students citing “fake news” as objective fact in schoolwork and classroom debates, US sixth graders being taught “news literacy” out of necessity, and a global “post-truth” epidemic that still finds no clear solutions or remedies. As we watch the next generation—who have never known life without the internet—set adrift in a world of inherent ambiguity, we urgently need to evolve search engines in parallel with the rapid pace of social progress.
Here are the problems that we must surmount if we are to continue creating and sharing tools to help amplify and advance knowledge, reasoning, critical thought, and creative thinking.
1: Search is in conceptual stasis
The search engine is one of the defining technologies of our modern age, yet it is one of our least evolved tools since its inception. Between 1998 and 2017, Google Search barely evolved as a conceptual model of information retrieval. Although it wasn’t the world’s first search engine, Google is now the dominant force in search by a monopoly-esque margin, currently holding 80.52% of the market share. As the world’s largest search engine, Google defines the conceptual mental models we employ to access information and knowledge. Its competitors—such as Bing, Yahoo!, Baidu (China), Yandex (Russia), and Naver (South Korea)—largely follow and mimic Google’s conceptual models, which leaves all search engines seeming functionally generic apart from a few minor territorial or technical differences. Arguably, the only noticeable changes Google has made in the past 20 years have been iterative cosmetic changes to its logo/Google Doodles and the page layout. Compare this to the evolution of cell phone technology over the same period—from bulky voice-and-SMS-only devices to the now ubiquitous smartphone, which is truly a pocket-sized personal computer.
2: Search is a black-box technology
Dominant search engines are the very definition of a “black-box technology”: As users we know the input (a search query) and the output (the search results), but we have no transparent knowledge of the internal methodologies a search engine applies between input and output. The question we increasingly need to ask ourselves is what exactly drives the transformation from a query to a result, from our question to their answer. What belief systems, values, methodologies, technical limitations, logics, reasoning, ideologies, philosophies, political influences, and financial influences make up the contents of these black boxes of transformation? We may never know.
3: Search is biased
No technology is neutral. Conducting a simple Google Image search on the term “beauty” will very quickly illustrate this—go try it. According to Google’s algorithms, beauty is rendered as purely relative to the human body, specifically the human face, the female face, the young female face, the young female face of defined body shape, and the young female face of defined body shape and racial profiles.
As such, the search is not only significantly biased by gender, age, race, and body shape, but also monolithically conceptually biased. There is no room in these results, for example, for a newborn baby, a stranger’s smile, a sunrise, or the paintings of William Turner. Every technology comes coupled with trade-offs and biases. The bias in this example is largely commercial: The beauty industry invests more money and substance (which translates to content and data) into the web’s infrastructure than appreciators of newborn babies or Turner’s paintings do. Beauty-industry-related results therefore fast become a self-fulfilling prophecy.
4: Search is overly simplistic
Larry Page, Google’s co-founder, once described the perfect search engine as a machine that “understands exactly what you mean and gives you back exactly what you want.” The critical issue with this notion is that it engineers out of our reality all the information and knowledge we didn’t mean to view but which might yet enlighten us; it edits out all that we don’t want to hear but might need to hear.
5: Search’s reach and depth are a fallacy
The trade-off we accept from algorithm-driven search engines is a quickly and conveniently ordered results page that floats the most relevant information to the top and sinks all other information incrementally further toward the bottom. Every search-results page is preceded by the total number of results found for that query—a figure usually in the millions. This figure gives us a reassuring sense that the search engine is doing its job wonderfully, both instantly accessing the vast amounts of relevant information available to us and organizing it into a usable, ordered state. But this figure is essentially arbitrary: Users wishing to access even a fraction of the depth it alludes to would spend months clicking through results pages. For example, a search for “what is love” returns about 480,000,000 results. If each page took just five seconds to load and another five seconds to skim, it would take a user roughly 11.5 days of consecutive clicking just to reach the one-millionth result, an undertaking very few of us would willingly take on.
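The arithmetic behind that figure can be sketched in a few lines. The ten-results-per-page count and the five-second load and skim timings below are assumptions for illustration, not measured properties of Google:

```python
# Back-of-envelope check: how long would it take to click through to
# the one-millionth result? All figures here are illustrative assumptions.
TOTAL_RESULTS = 480_000_000   # reported hit count for "what is love"
RESULTS_PER_PAGE = 10         # assumed results shown per page
SECONDS_PER_PAGE = 5 + 5      # assumed: 5 s to load + 5 s to skim

total_pages = TOTAL_RESULTS // RESULTS_PER_PAGE       # 48,000,000 pages in principle
pages_needed = 1_000_000 // RESULTS_PER_PAGE          # 100,000 page loads
days_needed = pages_needed * SECONDS_PER_PAGE / 86_400  # 86,400 seconds per day

print(f"{days_needed:.1f} days of consecutive clicking")
```

Under these assumptions the answer comes out at about 11.5 days, and even that covers barely 0.2% of the advertised depth.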
6: Search results’ methodology is a failing democracy
Within Google’s original philosophy, “Ten things we know to be true” (removed by Google in 2017 but preserved by archive.org), principle four states that “Democracy on the web works.” This was the founding principle of Google PageRank, an algorithm that counts the number and quality of links to a page to estimate how important the website is. PageRank was the primary means by which Google’s search results won the battle for relevancy and the resulting market dominance. The principle followed the logic that the most linked-to web pages are most likely to be the most relevant, and hence should sit higher in the organized hierarchy of results. Every link was treated as a digital vote, and by the logic of democracy, the most votes won the race to rule. This was always a flawed interpretation of democracy when applied to the web, exploited at first by those crudely manufacturing links through automated link farming to improve an individual website’s profile, and later by highly sophisticated manipulations conducted to advance political and ideological agendas.
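The core “links as votes” idea is simple enough to sketch in a few lines of Python. This is a toy power-iteration version for illustration only; Google’s production ranking has long since grown far beyond anything like it:

```python
# Minimal PageRank power iteration: a sketch of the "links as votes" idea,
# not Google's actual implementation.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                  # dangling page: share rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                             # each outlink is a weighted "vote"
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A toy web: everyone links to "hub", so "hub" accumulates the most votes.
toy_web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # "hub" is ranked highest
```

The manipulation described above amounts to fabricating extra nodes and links in exactly such a graph, so that the “votes” flowing into a target page are manufactured rather than earned.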
7: Search is gameable
In December 2016, an investigation by The Guardian found that Google Search was subject to manipulation by right-wing entities able to game the search autocomplete algorithms. They did this, according to research by Elon University assistant professor Jonathan Albright, by creating an ecosystem of interlinked alternative sites and content so sophisticated that its influence bleeds into the wider web. This gaming led to Google’s autocomplete finishing search queries with suggestions such as “are jews.. evil,” “are women.. evil,” and “do blacks.. commit more crime.” The issue was quickly rectified by Google, which released a statement saying, “We acknowledge that autocomplete isn’t an exact science and we’re always working to improve our algorithms.” This gap between a tool perceived as objective and the officially acknowledged reality of an inexact, highly subjective tool is ultimately fueling our so-called “post-truth” condition.
8: Search is unaccountable
The seemingly objective results rendered by algorithm-powered search engines are currently subject to no self-regulation, no agreed moral or ethical code of conduct, and no governing universal laws. Such unaccountability is unprecedented for a technology capable of significantly influencing elections, dictating supposedly objective accounts of current affairs, shaping societal norms, communicating interpretations of scientific findings, and steering consensus within public opinion.
9: Search is leading us into passive thinking
The “Google effect” describes the implications of a social and technological landscape in which algorithms increasingly mediate access to collective knowledge, encapsulate our memories, and make decisions for us. We are becoming ever more reliant upon these technologies’ powers of reasoning and less reliant upon our own. Hence, we quickly become less vigilant and increasingly passive in our judgment of the world around us.
Dr. Gary Klein, a research psychologist famous for pioneering the field of naturalistic decision making, describes the problem as such: “When the algorithms are making the decisions, people often stop working to get better. As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.” Attributes of human cognition that have evolved over millennia, such as our ability to practice critical thought, divergent thinking, empathy, reflection, and introspection, are now being radically diluted and potentially sacrificed to our reliance on externalized reasoning.
10: Search is an extraction industry
Like fossil fuels, mining, and quarrying, the financial model currently sustaining internet search is that of an extraction industry. Rather than the physical materials of traditional extraction industries, search extracts its users’ data and metadata for monetization, fueled by advertising revenues in an economic model commonly known as “surveillance capitalism.” Access to the data and metadata produced by our online behaviors and digital identities is now a commodity traded internationally by global corporations, and as with every extraction industry, exploitation is inevitable.
* * *
Such a diagnosis shouldn’t leave us in complete pessimistic despair. Technology, like life itself, is subject to evolution. Darwin lives among the machines as well as organic life. And as both natural evolution and technological progress demand mutation and adaptation, innovation and reinvention are continuous forces of time. New opportunities for change arise from each of the above issues—opportunities that will fill empty niches and find advocates ready to render new alternatives possible. A new wave of search engines is already putting user privacy at the fore, including DuckDuckGo, findx, Common Search, Startpage, and many more.
By creating both the demand for change and viable alternatives to deliver change, we can shift consensus and provide far better navigational tools for our children. Ultimately technologies should always serve their users, rather than users serving technologies. By tolerating these issues today, we are simply passing them down to another generation.
You can follow Ted on Twitter. Learn how to write for Quartz Ideas.