Google released its “autocomplete” feature in 2004, designed to streamline search. Enter the first few letters of a query, and Google would seem to magically surface exactly the term you wanted. Today the feature can feel somewhat invasive, even creepy, but it’s a convenience many users can’t seem to live without.
The feature works based on the popularity of particular terms. Google knows that a user searching for “wea” probably wants the weather, and more than likely wants today’s forecast. Popular queries like that surface logically enough, but autocomplete also tries to anticipate what you haven’t typed yet, and that’s where things get borderline offensive.
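Google hasn’t published the details of its ranking pipeline, but the core mechanic is easy to picture: match the typed prefix against logged queries and rank the matches by frequency. Here’s a minimal sketch in Python, with a made-up query log standing in for real search data:

```python
from collections import Counter

# Hypothetical query log; real suggestions would draw on aggregated search data.
query_log = [
    "weather", "weather today", "weather today",
    "weather forecast", "weather forecast", "weather forecast",
    "weak password", "wealth tax",
]
query_counts = Counter(query_log)

def suggest(prefix: str, limit: int = 3) -> list[str]:
    """Return the most popular logged queries starting with `prefix`."""
    matches = [(q, n) for q, n in query_counts.items() if q.startswith(prefix)]
    # Rank by popularity, most frequent first.
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:limit]]

print(suggest("wea"))  # ['weather forecast', 'weather today', 'weather']
```

Because the ranking reflects whatever people actually typed, the suggestions are only as polite as the query log behind them.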
Suggestions are based on real queries that people make, so Google is only reflecting data it has seen in the wild. This has led to some very interesting, and very troubling, suggestions. The most famous was documented in a Guardian article, where Google’s predictive search led the writer to a query regarding Holocaust denial.
Google’s solution is a feedback system that lets users report potentially offensive suggestions. Reported queries won’t immediately vanish, but users will be able to contribute to better policing of the system. It’s not entirely clear how this will help long term, but it will most likely shape Google’s algorithm by giving it more data about what users find offensive.
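Google hasn’t said how reports actually feed back into autocomplete, but a crude version of the loop is easy to imagine: tally reports per suggestion and suppress anything that crosses a threshold. The sketch below is purely illustrative; the threshold and data structures are assumptions, not Google’s implementation:

```python
from collections import defaultdict

REPORT_THRESHOLD = 100  # assumed cutoff; Google's real criteria aren't public

reports: defaultdict[str, int] = defaultdict(int)
suppressed: set[str] = set()

def report_suggestion(query: str) -> None:
    """Record one user report and suppress the suggestion once reports pile up.

    A real system would presumably weight reports, watch for coordinated
    flagging, and route borderline cases to human review rather than
    trusting a raw count.
    """
    reports[query] += 1
    if reports[query] >= REPORT_THRESHOLD:
        suppressed.add(query)

def filtered_suggestions(suggestions: list[str]) -> list[str]:
    """Drop any suggestion that has crossed the report threshold."""
    return [s for s in suggestions if s not in suppressed]
```

A raw counter like this makes the weakness obvious: anyone who can muster enough reports can make a suggestion disappear.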
It’s also not clear whether the system can be gamed by reporting terms that aren’t actually offensive but that some users want to block. What if a politician spoke out against a particular group, for instance, and that group banded together to flag searches for the politician’s name as offensive? There may still be a long way to go, but Google is off to an interesting start in helping users police themselves.