By comparing results between leading search engines, we identify patterns in their algorithmic search listings. We find that each search engine favors its own services: each links to its own services more often than other search engines do. But some search engines promote their own services significantly more than others. We examine patterns in these differences, and we flag keywords where the problem is particularly widespread. Even excluding "rich results" (whereby search engines feature their own images, videos, maps, etc.), we find that Google's algorithmic search results link to Google's own services more than three times as often as other search engines link to Google's services. For selected keywords, biased results advance search engines' interests at users' expense: We demonstrate that lower-ranked listings for other sites sometimes obtain more clicks than Google's and Yahoo's own-site listings, even when Google and Yahoo put their own links first.
In general, it is difficult to assess the bias of a search engine. Who can say whether a search engine shows the "right" listings for a given search term? If listings reflect a search engine's "subjective opinion," as search engines have claimed in litigation, then there is no way to ever argue -- not to mention prove -- that listings are in some way improper. But in certain circumstances, we believe an inference of bias is amply supported by analysis of search results and, especially, comparisons between search engines.
Our work follows a series of prior studies that also raised concerns about search engine bias. In general, their analyses are grounded in changes over time, typically combined with incentives for bias (for example, a search engine's possible desire to suppress potential competitors or to claim new sectors for itself). We credit their approach, which we examine and discuss at length in appendix 1, Others' Concerns about Search Engine Bias. Our project seeks to extend their analyses by invoking other sources of data, including cross-checking results from multiple search engines as well as analyzing click-through rates.
Although we analyze the degree of bias across each of the top five search engines, we are particularly interested in the extent of bias at Google. We focus on Google for two reasons. First, Google widely claims that its algorithmic results are "algorithmically-generated", "objective", and "never manipulated"; other search engines make such claims rarely or never. Second, Google's dominant market share (66% of U.S. core searches, and 95%+ in many European countries) means that any bias at Google has a much larger impact than bias at another search engine.
Table 1: Top algorithmic results for selected keywords (August 2010).

Rank | Google | Yahoo | Bing
---|---|---|---
"mail" | | |
1 | mail.google.com | mail.yahoo.com | mail.yahoo.com
2 | www.mail.com | www.mail.com | www.dmnews.com
3 | mail.yahoo.com | www.gmail.com | en.wikipedia.org
"email" | | |
1 | mail.google.com | mail.yahoo.com | mail.yahoo.com
2 | mail.yahoo.com | www.mail.com | www.email.com
3 | www.mail.com | www.gmail.com | en.wikipedia.org
"calendar" | | |
1 | www.timeanddate.com | calendar.yahoo.com | www.timeanddate.com
2 | www.google.com | calendar.yahoo.com | www.google.com
3 | download-llnw.oracle.com | developer.yahoo.com | en.wikipedia.org
"chat" | | |
1 | www.chat-avenue.com | asia.chat.yahoo.com | messenger.yahoo.com
2 | chatroulette.com | www.chat.com | www.chat.com
3 | www.google.com | chat.icq.com | chat-avenue.com
"maps" | | |
1 | maps.google.com | maps.yahoo.com | maps.google.com
2 | www.mapquest.com | maps.google.com | maps.yahoo.com
3 | maps.yahoo.com | www.maps.com | www.maps.com
"video" | | |
1 | video.google.com | video.yahoo.com | video.google.com
2 | www.cnn.com | www.youtube.com | en.wikipedia.org
3 | www.youtube.com | video.google.com | video.yahoo.com

(Full table in appendix 2.)
Table 2: Number of algorithmic results on each search engine (rows) that link to pages affiliated with each search engine (columns: result source).

Search Engine | AOL | Ask | Bing | Google | Yahoo | Other
---|---|---|---|---|---|---
Single Top Algorithmic Result | | | | | |
AOL (Google) | 0 | 0 | 0 | 11 | 2 | 18
Ask | 0 | 1 | 0 | 6 | 7 | 18
Bing | 0 | 0 | 0 | 5 | 5 | 22
Google | 0 | 0 | 0 | 11 | 1 | 20
Yahoo | 0 | 0 | 0 | 3 | 11 | 18
Top 3 Algorithmic Results | | | | | |
AOL (Google) | 0 | 0 | 2 | 22 | 12 | 57
Ask | 0 | 1 | 1 | 14 | 11 | 69
Bing | 0 | 0 | 0 | 13 | 14 | 69
Google | 0 | 0 | 2 | 21 | 10 | 63
Yahoo | 0 | 0 | 0 | 17 | 16 | 63
Entire First Page of Algorithmic Results | | | | | |
AOL (Google) | 3 | 1 | 10 | 31 | 17 | 248
Ask | 0 | 1 | 3 | 19 | 16 | 242
Bing | 1 | 0 | 11 | 26 | 19 | 262
Google | 2 | 1 | 8 | 32 | 15 | 261
Yahoo | 5 | 1 | 9 | 28 | 37 | 233
Table 3: Logistic regression odds ratios (p-values in parentheses).

 | Google | Yahoo
---|---|---
Top result | 3.1 (0.02) | 3.3 (0.01)
Top 3 results | 1.6 (0.14) | 1.4 (0.26)
First page | 1.3 (0.27) | 2.3 (0.00)
We seek to measure search engine bias through automated comparison of algorithmic links. We do not aspire to say whether individual sites have suffered search engine penalties. Rather, we investigate a question we view as easier and more amenable to automated analysis: whether search engines' algorithmic results favor their own services, and if so, which search engines do so most, to what extent, and in what substantive areas.
We begin with motivating examples, as shown in the top section of Table 1. Search for "mail" or "email" on Google, Yahoo, and Bing, and you will see strikingly different results. (These results are based on a large number of automated searches performed in August 2010. Some results have changed since then.) Notice that both Google and Yahoo placed their own email services first (denoted by red highlighting in the table).
The favored placement of own-service links continues well beyond the keywords "mail" and "email." The remainder of Table 1 presents five other common searches distinctively featuring own-service links, while appendix 2 reports similar results for a variety of additional search terms.
These examples suggest a pattern: Google and Yahoo tend to put their "house brand" services first, while Microsoft (Bing) does so much less often. Taking seriously the suggestion that top algorithmic links should in fact present the most relevant and useful results for a given search term, it is hard to see why results would vary in this way across search engines. Certainly search engines' distinct ranking algorithms could yield different site orderings. But if algorithmic results are truly objective measurements of a site's popularity or relevance to a given search, one might expect any differences across search engines to be random rather than systematically favoring each engine's own services. What would cause Google to place Gmail first, while Yahoo places Yahoo Mail first? The simplest and most straightforward explanation is that both Google and Yahoo give preferred placement to listings for their respective services.
To formalize our analysis, we formed a list of 32 search terms for services commonly provided by search engines, such as "email", "calendar", and "maps". We searched for each term using the top 5 search engines: Google, Yahoo, Bing, Ask, and AOL. We collected this data in August 2010.
We preserved and analyzed the first page of results from each search. Most results came from sources independent of search engines, such as blogs, private web sites, and Wikipedia. However, a significant fraction – 19% – came from pages that were obviously affiliated with one of the five search engines. (For example, we classified results from youtube.com and gmail.com as Google, while Microsoft results included msn.com, hotmail.com, live.com, and Bing.)
Table 2 reports the number of times each search engine linked to algorithmic results affiliated with a search engine (either itself or one of the others). This tabulation is displayed separately for the top algorithmic result, the top three results, and the entire first page of results. Cells highlighted in red correspond to cases in which search engines link to their own results. (Note that AOL's links to Google-affiliated results are also highlighted, since AOL Search is powered by Google.)
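To illustrate the classification and tabulation behind Table 2, here is a minimal sketch in Python. The file name results.csv, its columns, and the domain-to-owner mapping are our own illustration, not the study's actual code or data format; the study classified many more affiliated domains than the handful shown here.

```python
import csv
from collections import defaultdict

# Illustrative, non-exhaustive mapping of result domains to owning search engine.
AFFILIATION = {
    "google.com": "Google", "gmail.com": "Google", "youtube.com": "Google",
    "yahoo.com": "Yahoo", "flickr.com": "Yahoo",
    "bing.com": "Bing", "msn.com": "Bing", "hotmail.com": "Bing", "live.com": "Bing",
    "ask.com": "Ask",
    "aol.com": "AOL",
}

def classify(domain):
    """Return the search engine that owns a result domain, or 'Other'."""
    parts = domain.lower().split(".")
    for i in range(len(parts) - 1):
        suffix = ".".join(parts[i:])
        if suffix in AFFILIATION:
            return AFFILIATION[suffix]
    return "Other"

# counts[search_engine][result_source] -> number of collected listings
counts = defaultdict(lambda: defaultdict(int))

with open("results.csv") as f:          # hypothetical columns: engine, keyword, rank, domain
    for row in csv.DictReader(f):
        counts[row["engine"]][classify(row["domain"])] += 1

for engine, sources in sorted(counts.items()):
    print(engine, dict(sources))
```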
We see some evidence of own-service bias within the first page of results (the bottom panel of Table 2): Yahoo places Yahoo destinations in the first page of search results 37 times, substantially more than other search engines. (Bing, with 19 links to Yahoo, is closest -- linking to Yahoo about half as often as Yahoo links to itself. Google links to Yahoo services just 15 times, about 40% as often as Yahoo links to itself.) Google and AOL also link to Google results more frequently than other search engines, though the spread is less dramatic.
We see stronger evidence of own-service bias when we examine the single most prominent algorithmic result for each keyword. Google and Yahoo are the only search engines whose pages regularly appear in the top 3 results of any search engine, or whose pages are ever listed as the top result (with the exception of a single own-page link at Ask.com). The top two sections of Table 2 show pronounced evidence of bias at both Google and Yahoo: For example, Google and AOL each list pages from Google as the first result 11 times, about twice as frequently as other search engines list Google pages in the first result. Similarly, Yahoo lists Yahoo pages as the first result 11 times. In contrast, results from Bing and Ask show little evidence of bias.
Even if search engines were unbiased, we would expect some random variation across search engines. Can random variation explain the patterns shown in Table 2? We use logistic regressions to investigate.
We form a dataset with one row per search result. For each row, one field reports the search engine on which the search was performed. A second field classifies the source of the result -- identifying results affiliated with one of the search engines.
We then run regressions as follows:
Pr(isGoogleResult) ~ isGoogleSearch
Pr(isYahooResult) ~ isYahooSearch
The labels isGoogleResult and isGoogleSearch denote dummy variables indicating, respectively, whether a result links to Google and whether the search that generated the result was performed on Google (and similarly for Yahoo).
These regression specifications let us compare the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine's pages. If search engine X links to its own pages significantly more often than other search engines link to X's pages, that's prima facie evidence of bias.
In this analysis, we drop searches using AOL to avoid double-counting Google results. If we instead count AOL searches as Google searches, Google's level of bias appears higher.
We restrict analysis to Google and Yahoo because results promoting other search engines (or services offered by other search engines) appear too rarely to generate precise coefficient estimates. Additionally, Table 2 shows little evidence of bias among other search engines, and extensions to Table 3 (to consider other search engines) show little evidence of bias elsewhere.
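A minimal sketch of the Google specification using statsmodels appears below; the file and column names (results.csv, engine, source) are hypothetical, and the Yahoo specification is analogous with isYahooResult and isYahooSearch.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per collected algorithmic result; column names are our own illustration.
# 'source' holds the owning search engine for the linked page, or "Other".
df = pd.read_csv("results.csv")
df = df[df["engine"] != "AOL"]              # drop AOL searches to avoid double-counting Google

df["isGoogleResult"] = (df["source"] == "Google").astype(int)
df["isGoogleSearch"] = (df["engine"] == "Google").astype(int)

# For the "top result" row of Table 3, one would first filter to rank == 1
# (rank column assumed); for "top 3 results", to rank <= 3.
model = sm.Logit(df["isGoogleResult"],
                 sm.add_constant(df[["isGoogleSearch"]])).fit(disp=0)

print(np.exp(model.params["isGoogleSearch"]))   # odds ratio, as reported in Table 3
print(model.pvalues["isGoogleSearch"])          # p-value for the null of no bias (odds ratio = 1)
```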
Table 3 presents the results of these logistic regressions. The table reports odds ratios, so an estimate of 1 would indicate that the ratio of Google links to non-Google links is the same among Google searches as among searches using other engines. (This is the result we would expect absent any bias.)
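For concreteness, the quantity we estimate for Google can be written, in our notation, as:

```latex
\mathrm{OR}_{\text{Google}} =
  \frac{\Pr(\text{Google result} \mid \text{Google search}) \,/\, \bigl[\,1 - \Pr(\text{Google result} \mid \text{Google search})\,\bigr]}
       {\Pr(\text{Google result} \mid \text{other search}) \,/\, \bigl[\,1 - \Pr(\text{Google result} \mid \text{other search})\,\bigr]}
```

The analogous ratio is estimated for Yahoo.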
Table 3 shows that in every case the estimated odds ratio is greater than 1, and in many cases the odds ratio is statistically significantly different from 1 (with a low P-value). That is, it is extremely unlikely that search engines would return such results (which sharply overrepresent their own services) if each search engine's algorithm in fact treats other search engines' services as favorably as it treats its own.
The strongest results in Table 3 occur in the top row, analyzing the top-most listings from each search engine. Table 3 indicates that both Yahoo and Google are much more likely to place their own pages first, relative to other search engines, and these differences are significant at the 1% level for Yahoo and the 2% level for Google.
We focus our analysis of bias on these portal services keywords because portal service terms offer a particularly clear opportunity for search engines to favor their own services. However, our methodology is equally able to examine a similar bias in a broader set of search terms. We discuss this possibility, and similar analyses using other sets of keywords, in Appendix 3.
As the introduction mentions, standard algorithmic results are not the only content that appears on search result pages. Rich results -- featuring multimedia content such as images and videos -- also appear prominently and often link to destinations affiliated with the search engine on which they appear. We focus on algorithmic results here, as bias among algorithmic results is arguably more difficult for a user to discern. Our Appendix 4 explores patterns in rich results, comparing rich results across search engines, tracking how often rich results yield further own-site links, and measuring the extent to which rich results link to a search engine's own services.
In principle, a search engine might feature its own services because its users prefer these links. For example, if users of Google's search service tend to click on algorithmic links to other Google services, whereas users of Yahoo search tend to click algorithmic links to Yahoo services, then each search engine's optimization systems might come to favor their respective affiliated services. We call this the "user preference" hypothesis, as distinguished from the "bias" theory set out above.
To test the user preference and bias hypotheses, we use click-through rate (CTR) data from two different sources for searches at Google, Yahoo, and Bing. Using CTR data from comScore and another service that (with users' permission) tracks users' searches and clicks (a service which prefers not to be listed by name), we analyze the frequency with which users click on search results for selected terms. The data span a four-week period, centered around the time of our automated searches.
In click-through data, the most striking pattern is that the first few search results receive the vast majority of users' clicks. Across all search engines and search terms, the first result received, on average, 72% of users' clicks, while the second and third results received 13% and 8% of clicks, respectively.
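This concentration of clicks by rank can be computed mechanically. A minimal sketch, assuming a hypothetical clicks.csv click log whose column names (engine, keyword, rank, clicks) are our own:

```python
import pandas as pd

# Hypothetical click log: one row per result slot, with the number of observed clicks.
clicks = pd.read_csv("clicks.csv")   # columns: engine, keyword, rank, clicks

# Each result's share of clicks within its (engine, keyword) results page.
totals = clicks.groupby(["engine", "keyword"])["clicks"].transform("sum")
clicks["share"] = clicks["clicks"] / totals

# Average click share by rank, pooled across engines and keywords.
print(clicks.groupby("rank")["share"].mean().head(3))
```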
This concentration of users' clicks makes it difficult to disprove the user preference hypothesis. For example, as shown in Table 1, Google and Yahoo each list their own maps service as the first result for the query "maps". Our CTR data indicates that Google Maps receives 86% of user clicks when the search is performed on Google, and Yahoo Maps receives 72% of clicks when the search is performed on Yahoo. One might think that this concentration is evidence of users' preference for the service affiliated with their search engine. On the other hand, since clicks are usually highly concentrated on the first result, it is possible that users have no such preference, and that they are simply clicking on the first result because it appears first. Moreover, since the advantage conferred by a result's rank likely differs across different search queries, we do not believe it is appropriate to try to control for ranking in a regression.
Nevertheless, there is one CTR pattern that would be highly suggestive of bias. Suppose we see a case in which a search engine ranks its affiliated result highly, yet that result receives fewer clicks than lower results. This would suggest that users strongly prefer the lower result -- enough to overcome the effect of the affiliated result's higher ranking.
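Such inversions can be flagged automatically. The sketch below again assumes the hypothetical click log above, plus a result-domain column; the own-service domain list is illustrative, not the study's full classification.

```python
import pandas as pd

# Hypothetical click log with the linked result's domain added.
clicks = pd.read_csv("clicks.csv")   # columns: engine, keyword, rank, domain, clicks

# Minimal own-service test (illustrative only).
OWN_DOMAINS = {
    "Google": ("google.com", "gmail.com", "youtube.com"),
    "Yahoo": ("yahoo.com", "flickr.com"),
}

inversions = []
for (engine, keyword), page in clicks.groupby(["engine", "keyword"]):
    page = page.sort_values("rank")
    top = page.iloc[0]
    own_first = top["domain"].endswith(OWN_DOMAINS.get(engine, ()))
    # Flag pages where the engine's own result is ranked first yet receives
    # fewer clicks than some lower-ranked result.
    if own_first and (page["clicks"].iloc[1:] > top["clicks"]).any():
        inversions.append((engine, keyword))

print(inversions)
```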
As it turns out, we sometimes observe exactly this pattern in the CTR data. The strongest example for Google is the term "email": Gmail, the first result, receives 29% of users' clicks, while Yahoo Mail, the second result, receives 54%. Across the keywords we examined, the top-most result usually receives 5.5 times as many clicks as the second result, yet here Gmail obtains only 53% as many clicks as Yahoo Mail. Nor is "email" the only term exhibiting this inversion at Google; other terms, such as "mail", exhibit a similar inversion for individual days in our data set, though "email" is the only term for which the difference is large and stable across the entire period.
We also found similar cases of inversion at Yahoo. For example, for the term "video," Yahoo lists video.yahoo.com as the first search result, but this site receives just 21% of clicks, compared to 39% for youtube.com (even though YouTube appears in second place). So too for the term "pictures": Yahoo's flickr.com is listed first, yet receives fewer clicks than images.google.com, the second result.
In light of the strong relationship between listing order and CTR, it is no surprise that these inversions are rare. But the fact that we see these inversions at all indicates that the apparent bias from Tables 1 and 2 cannot be explained by heterogeneous user preferences across search engines.
Google typically claims that its results are "algorithmically-generated", "objective", and "never manipulated." Google asks the public to believe that algorithms rule, and that no bias results from its partnerships, growth aspirations, or related services. We are skeptical. For one, the economic incentives for bias are overpowering: Search engines can use biased results to expand into new sectors, to grant instant free traffic to their own new services, and to block competitors and would-be competitors. The incentive for bias is all the stronger because the lack of obvious benchmarks makes most bias difficult to uncover. That said, by comparing results across multiple search engines, we provide prima facie evidence of bias; especially in light of the anomalous click-through rates we describe above, we can only conclude that Google intentionally places its results first.
While Google usually promises unbiased search results, we know of one context in which a senior Google executive admitted that it is company policy to give its own services preferred placement. At a 2007 conference, Google's Marissa Mayer commented: "[When] we roll[ed] out Google Finance, we did put the Google link first. It seems only fair right, we do all the work for the search page and all these other things, so we do put it first... That has actually been our policy, since then, because of Finance. So for Google Maps again, it’s the first link." We credit Mayer's frank admission, and our analysis is consistent with her description of Google's practices. But we're struck by the divergence between her statement and Google's public claims in every other context.
Advertisers and publishers have long recognized that search is valuable for the huge advertising revenue it can generate. But through search result bias, search engines can also affect where users go online -- including sending users to a search engine's own sites, and making it far easier for a widely-used search engine to establish such ancillary sites than for any upstart to develop a similar site. Uncovering this favoritism remains difficult, as does verifying what has occurred and why. Consumer Watchdog's recent Traffic Report offered one approach -- flagging changes in Google's rich results and other links, and showing drops in traffic to affected sites during the same period. But as Google becomes even more dominant, we envision substantially greater investigation of the effect of Google's linking policies, ultimately including deeper outside verification and oversight.
Over the past three decades, the US Department of Justice and Department of Transportation have intervened when computer reservation systems -- the electronic intermediaries linking airlines to travel agents -- gave preferred treatment to their owner and/or partner airlines. For example, a reservation system might list its owner's flights first even if another carrier offered a cheaper fare or a nonstop flight -- a practice that invited agents to make suboptimal choices. Regulation put an end to this practice -- prohibiting reservation systems from considering carrier identity or ownership when ranking flights. In the context of web search, such a rule would be harder to enforce because web page rankings depend on more factors than flight rankings and because web search algorithms are notoriously opaque. But the principle stands. In the long run, just as Windows source code and APIs are subject to outside scrutiny, we expect that search algorithms will require similar external review. Last month the European Commission announced an investigation of biased results at Google, including "alleged preferential placement of Google's own services." We credit that effort, and our analysis suggests that the EC's investigation will indeed reveal that Google has intentionally put its own links first.
Posted: January 19, 2011