Knowing Certain Trademark Ads Were Confusing, Google Sold Them Anyway — for $100+ Million

Disclosure: I serve as a consultant to various companies that compete with Google. But I write on my own — not at the suggestion or request of any client, without approval or payment from any client.

When a user enters a search term that matches a company’s trademark, Google often shows results for the company’s competitors. To take a specific example: Searches for language software seller "Rosetta Stone" often yield links to competing sites — sometimes, sites that sell counterfeit software. Rosetta Stone thinks that’s rotten, and, as I’ve previously written, I agree: It’s a pure power-play, effectively compelling advertisers to pay Google if they want to reach users already trying to reach their sites; otherwise, Google will link to competitors instead. Furthermore, Google is reaping where others have sown: After an advertiser builds a brand (often by advertising in other media), Google lets competitors skim off that traffic — reducing the advertiser’s incentive to invest in the first place. So Google’s approach to trademarks definitely harms advertisers and trademark-holders. But it’s also confusing to consumers. How do we know? Because Google’s own documents admit as much.

Today Public Citizen posted an unredacted version of Rosetta Stone’s appellate brief in its ongoing litigation with Google. Google had sought to keep confidential the documents at the core of district court and appellate adjudication of the dispute, but now some of the documents are available — giving an inside look at Google’s policies and objectives for trademark-triggered ads. Some highlights:

  • Through early 2004, Google let trademark holders request that ads be disabled if those ads used a trademark as a keyword or in ad text. But in early 2004, Google determined that it could achieve a "significant potential revenue impact" from selling trademarks as keywords. (ref)
  • In connection with Google’s 2004 policy change letting advertisers buy trademarks as keywords, Google conducted experiments to assess user confusion from trademarks appearing in search advertisements. Google concluded that showing a trademark anywhere in the text of an advertisement resulted in a "high" degree of consumer confusion. The study reported: "Overall very high rate of trademark confusion (30-40% on average per user) … 94% of users were confused at least once during the study." (ref)
  • Notwithstanding Google’s 2004 study, Google in 2009 changed its trademark policy to permit the use of trademarks in advertisement text. Google estimated that this policy change would result in at least $100 million of additional annual revenue, and potentially more than a billion dollars of additional annual revenue. Google implemented this change without any further studies or experiments as to consumer confusion. (ref)
  • Google possesses more than 100,000 pages of complaints from trademark holders, including at least 9,862 complaints from at least 5,024 trademark owners from 2004 to 2009. (ref)

Kudos to Public Citizen for obtaining these documents. That said, I believe Google should never have sought to limit distribution of these documents in the first place. In other litigation, I’ve found that Google’s standard practice is to attempt to seal all documents, even where applicable court rules require that documents be provided to the general public. That’s troubling, and that needs to change.

Hard-Coding Bias in Google “Algorithmic” Search Results

I present categories of searches for which available evidence indicates Google has “hard-coded” its own links to appear at the top of algorithmic search results, and I offer a methodology for detecting certain kinds of tampering by comparing Google results for similar searches. I compare Google’s hard-coded results with Google’s public statements and promises, including a dozen denials but at least one admission. I conclude by analyzing the impact of Google’s tampering on users and competition, and by proposing principles to block Google’s bias.

Details, including screenshots, methodology, proposed regulatory response, and analogues in other industries:

Hard-Coding Bias in Google “Algorithmic” Search Results

A Closer Look at Google’s Advertisement Labels

Google’s tiny ‘Ads’ label

The FTC has called for “clear and conspicuous disclosures” in advertisement labels at search engines, and the FTC specifically emphasized the need for “terms and a format that are easy for consumers to understand.” Unfortunately, Google’s new advertisement labels fail this test: Google’s “Ads” label is the smallest text on the page, far too easily overlooked. (Indeed, as I show in the image at left, the “Ads” label substantially fits within an “o” in “Google.”) Meanwhile, Google now merges algorithmic and advertisement results within a single set of listings; Google’s “Help” explanations are inaccurate; and Google uses inconsistent labels mere inches apart within search results, as well as across services.

Details, including the shortfalls, screenshots, comparisons, and proposed alternatives:

A Closer Look at Google’s Advertisement Labels


Labels and Disclosures in Search Advertising with Duncan Gilchrist

Disclosure: I serve as a consultant to various companies that compete with Google. But I write on my own — not at the suggestion or request of any client, without approval or payment from any client.

Search engines have long labeled their advertisements with labels like “Sponsored links”, “Sponsored results”, and “Sponsored sites.” Do users actually know that these labels are intended to convey that the listings are paid advertisements? In a draft paper we’re posting today, Duncan Gilchrist and I try to find out.

“Sponsored Links” or “Advertisements”?: Measuring Labeling Alternatives in Internet Search Engines

In an online experiment, we measure users’ interactions with search engines, both in standard configurations and in modified versions with improved labels identifying search engine advertisements. In particular, for a random subset of users, we change “sponsored link” labels to instead read “paid advertisement.” We find that users receiving the “paid advertisement” label click 25% to 33% fewer advertisements and correctly report that they click fewer advertisements, controlling for the number of advertisements they actually click. Results are most pronounced for commercial searches, and for users with low income, low education, and little online experience.
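
To give a flavor of the comparison underlying that estimate, here is a minimal sketch with entirely made-up numbers; it illustrates the general approach of comparing click counts across randomly assigned label conditions, not the paper's actual data or analysis:

# Hypothetical per-user ad-click counts under the two label conditions.
control_clicks   = [3, 1, 2, 4, 2, 3]   # users shown "Sponsored links"
treatment_clicks = [2, 1, 1, 3, 1, 2]   # users shown "Paid advertisement"

mean_control = sum(control_clicks) / len(control_clicks)
mean_treatment = sum(treatment_clicks) / len(treatment_clicks)
reduction = (mean_control - mean_treatment) / mean_control

# With these invented numbers the gap is about 33%, in the spirit of the
# 25% to 33% reduction reported in the paper.
print(f"control: {mean_control:.2f}  treatment: {mean_treatment:.2f}  "
      f"reduction: {reduction:.0%}")

The paper's actual estimates rest on a much larger sample; this sketch shows only the basic quantity being compared.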

We consider our findings particularly timely in light of Google’s change, just last week, to label many of its advertisements as “Ads.” On one view, “Ads” is an improvement – probably easier for unsophisticated consumers to understand. Yet it’s a strikingly tiny label – the smallest text anywhere in Google’s search results, and about a quarter as many pixels as the corresponding disclosure on other search engines. As our paper points out, FTC litigation has systematically sought the label “Paid Advertisement,” and we still think that’s the better choice.

Tying Google Affiliate Network

Disclosure: I serve as co-counsel in unrelated litigation against Google, Vulcan Golf et al. v. Google et al. I also serve as a consultant to various companies that compete with Google. But I write on my own — not at the suggestion or request of any client, without approval or payment from any client.

In one of the few areas of Internet advertising where Google is not dominant – where just three years ago Google had no offering at all – Google now uses tying to climb towards a position of dominance. In particular, using its control over web search, Google offers preferred search ad placement and superior search ad terms to the advertisers who agree to use Google Affiliate Network. Competing affiliate networks cannot match these benefits, and Google’s bundling strategy threatens to grant Google a position of power in yet another online advertising market.

Google shows algorithmic search results at the left side of users’ screens, while Google’s “AdWords” ads appear at the right and, often, top. Historically, Google has sold search ads on a cost-per-click basis: An advertiser is charged each time a user clicks its ad. With these offerings, Google has grown to a position of dominance in search and in search advertising — a 77% share of U.S. web search, with even higher levels in other countries.

While Google dominates online search, Google to date has made less headway in the area of affiliate marketing, an approach to online advertising wherein small to midsized sites (“affiliates”) receive payments if users click their links and make purchases from the corresponding merchants. For example, Gap pays a 2% to 4% commission if a user clicks an affiliate link to Gap and goes on to make a purchase. While almost all of the web’s largest merchants run affiliate programs, as of the start of 2007 Google offered no affiliate marketing services. Only through its mid-2007 acquisition of DoubleClick did Google obtain an affiliate marketing program, then called Performics and now renamed Google Affiliate Network (GAN). But Google’s affiliate network began in third place in the US market — behind larger competitors Commission Junction and LinkShare.

Google now grants GAN advertisers preferred placement in search results. Beginning in November 2009, Google’s Product Listing Ads service gave GAN major advantages over competing affiliate networks: Within search ads, Google now includes listings not just from its AdWords pay-per-click advertisers, but also from GAN advertisers. In the screenshot at right, notice that the three GAN ads appear with images, whereas ordinary AdWords ads show only text, and that Google places all GAN image ads at the top of the right rail, above all right-side AdWords ads. Through these placements, Google offers GAN advertisers four striking and valuable benefits:

  • Image ads. AdWords advertisements show only text. But GAN advertisements include an image — making GAN offers stand out in search results. See the three image ads highlighted in red in the screenshot at right.
  • Preferred placement. AdWords advertisements are ordered, Google says, based on how much each advertiser bids as well as Google’s assessment of ad relevance, click-through rate, and other factors known only to Google. But in my testing, all GAN ads appear at the top of the “right rail” of side listings — prominent, highly visible screen space that gets more attention than any AdWords listings below. Indeed, by pushing AdWords ads further down the page, GAN ads reduce the value of the AdWords slots. In the screenshot at right, notice that all three GAN image ads appear above all the right-rail AdWords ads.
  • Conversion-contingent payment. AdWords advertisers continue to pay on a per-click basis, incurring costs as soon as a user clicks a link. In contrast, GAN advertisers only have to pay if a user clicks a link and purchases a product.
  • Preferred payment terms. Because AdWords advertisers pay as soon as a user clicks, they must pay for users’ clicks even if servers malfunction, even if credit card processors reject users’ charges, and even if users return their orders or initiate chargebacks. In contrast, in all these circumstances, GAN advertisers incur no advertising costs at all. (A brief illustrative calculation follows this list.)
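
To see how much these payment terms can matter, here is a small illustrative calculation with hypothetical numbers of my own choosing (not Google's or any advertiser's actual rates), comparing what an advertiser would owe under AdWords-style per-click billing versus GAN-style commission billing:

# Hypothetical comparison: cost of 1,000 ad clicks under per-click (AdWords)
# versus conversion-contingent commission (GAN) billing.  All figures invented.
clicks = 1000
cost_per_click = 0.50      # assumed AdWords bid, in dollars
conversion_rate = 0.02     # assume 2% of clicks lead to a completed purchase
order_value = 100.00       # assumed average order size, in dollars
commission_rate = 0.04     # 4% commission, cf. the Gap example above

adwords_cost = clicks * cost_per_click
gan_cost = clicks * conversion_rate * order_value * commission_rate

print(f"AdWords cost: ${adwords_cost:.2f}")   # $500.00, owed on every click,
                                              # even for failed or returned orders
print(f"GAN cost:     ${gan_cost:.2f}")       # $80.00, owed only on completed sales

The point is not the particular numbers but the structure: the AdWords advertiser bears the cost of every click regardless of outcome, while the GAN advertiser pays only when a sale completes.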

I expect Google will argue that it is within its rights to package, bundle, and tie its products as it sees fit. I disagree. Here, Google ties its search offering to its affiliate network without an apparent pro-competitive purpose but with obvious anti-competitive effects. In particular, tying affiliate network services to preferred search ad format and placement gives GAN an advantage over competing affiliate networks, without efficiencies or other countervailing benefits to users or advertisers.

Furthermore, there is no plausible justification for providing image ads only to GAN advertisers or for granting all GAN ads positions above all right-side AdWords ads. To the contrary, Google could easily allow all AdWords ads to include images, and Google could instead intersperse GAN ads (and ads from other affiliate networks) among AdWords advertisements in whatever order auctions and algorithms fairly deem optimal. Those would be the natural product design decisions if Google genuinely sought to include images wherever useful and if Google genuinely sought to include affiliate ads whenever relevant. Because Google instead reserves these benefits for GAN advertisers, the natural inference is that Google reserves special rewards for advertisers choosing GAN — benefits that come at the expense of genuine competition in affiliate marketing services.

In the remainder of this piece, I discuss why the public should be concerned about Google’s tying tactics, then assess Google’s tying-based promotion of its various other products. I conclude with brief policy prescriptions.

Cause for Concern

I see four major reasons for concern in Google’s decision to tie GAN to preferred placements, format, and terms in sponsored search.

First, GAN’s tying threatens to extend Google’s dominance into yet another facet of online advertising. Google’s dominance in search and search advertising is well-known. But affiliate marketing is a rare area where, until recently, Google had little or no presence. By leveraging its dominance in search to take over yet another type of online advertising, Google will importantly limit advertisers’ options. Today, advertisers unhappy with Google’s AdWords prices or rules can consider working with independent web sites through affiliate programs not operated by Google. But if Google comes to dominate affiliate marketing, then even affiliate marketing will become unavailable to advertisers dissatisfied with Google. Indeed, knowing that it dominates multiple aspects of online advertising, Google will be in a position to raise prices that much further.

Second, GAN’s tying harms those AdWords advertisers who refuse GAN and buy only pay-per-click ads from Google. The more GAN ads Google puts above ordinary AdWords listings, the less visible AdWords advertisers become. AdWords advertisers are at a further disadvantage when Google gives image ads to GAN advertisers but not AdWords advertisers, and when Google offers preferred terms (e.g. refunds of advertising costs if a user returns a product) to GAN advertisers but not AdWords advertisers. Google promises that “the highest ranked ad is displayed in the most prominent position,” but when Google gives GAN ads the top positions, ordinary AdWords advertisers are left bidding on the leftovers. And as Google makes its left-side listings increasingly visual — inline maps, images, product pictures, video thumbnails, and more — advertisers need images to capture users’ attention. So AdWords-only advertisers, without image-based ads, end up at a significant disadvantage.

Third, for nearly a year Google has offered the Product Listing Ads benefits in “limited beta” available only to “a small number of participants” Google selects. In fact I’ve seen numerous advertisers, large and small, promoted in Product Listing Ads. But it is striking to see Google offer preferred listings only to those advertisers Google chooses to favor. Elsewhere Google argues that its auction-based ad sales are “equitable.” But when Google gives superior placement to its preferred advertisers, for nearly a year, Google’s rules seem the opposite of fair.

Finally, GAN’s tying is particularly worrisome in the context of other Google tactics. As detailed in the next section, Google uses and has used bundling and tying to enter and dominate numerous markets. If these tactics continue unchecked, we face a future where Google’s dominance stretches even further.

Google’s Tying Strategy More Broadly

Tying GAN to search is just one example of Google’s oft-repeated tactic of forcing customers who want one Google service to accept additional Google services too. This section presents a series of such examples.

Throughout, these tying examples fit the following form:

A [user type] who wants [desirable Google service] must also accept [unwanted Google service].

I now turn to specifics.

Tying to promote affiliate marketing services: An advertiser who wants top placement in Google search advertisements, image ads, and preferred payment terms must join Google Affiliate Network.

Details: See above.

Tying to promote low-quality syndicated search marketing services: An advertiser who wants placement through high-quality Google Search Network sites must also accept low-quality Google Search Network placements.

Details: Google’s Search Network includes some top-quality publishers such as AOL Search and New York Times. But if an advertiser contracts to advertise through Google Search Network, Google demands permission to also place the advertiser’s ads on whatever other sites Google selects, in whatever quantity Google chooses. Many of these placements are low-quality or worthless, including spyware popups, typosquatting sites, and deceptive toolbars. Many of these placements trick advertisers into believing they are receiving valuable traffic when in fact the traffic consists of users the advertisers had already reached or would receive anyway. Even if an advertiser learns about these problems, the advertiser must continue to pay for this traffic, on pain of losing access to Google’s high-quality search partners.

Tying to promote vertical search: A user who wants Google’s core algorithmic search results must also accept Google’s own vertical search results.

Details: Users relish Google’s highly-regarded algorithmic search results. But a user running a search at Google also receives Google’s vertical search services: Whether the user prefers Bing Maps, Google Maps, Mapquest, or Yahoo Maps, Google Search always presents inline maps from Google, and so too for images, local businesses, products, scholarly articles, videos, and more. On one view, these vertical search services are an integral part of Google’s offering, but scores of competing vendors reflect a competing vision in which users choose core algorithmic search separately from vertical search services. By granting its special-purpose search services preferred placement, Google sharply reduces traffic to competing vertical search services.

Tying to promote ancillary mobile services: A mobile phone developer who wants Google’s Android certification and access to the Android Market application store must also accept Google’s ancillary services, including geolocation.

Details: In a September 2010 complaint, Skyhook alleges that Google ordered Motorola not to ship a proposed device that would have included both Google Location Service and Skyhook’s XPS service, two distinct methods to determine a user’s geographic location. Skyhook claims that Google grounded its threat in Google’s Android Market application store: If Motorola shipped a device with software Google did not approve, Google would ban users of that device from accessing Android Market or running the apps available there. By requiring that Motorola omit Skyhook’s service in order to give users access to Android Market, Google denied users access to Skyhook.

Policy Prescriptions

Advertisers, consumers, policy-makers and the concerned public should give tying relationships a careful look. In principle, bundling previously-separate offerings can offer useful synergies and efficiencies. But bundling can also let a company expand from strength in one area into dominating numerous additional fields — limiting choice, raising prices, and reducing innovation.

In some instances, it may not be obvious how to separate bundled products. For example, there is currently no single clear mechanism whereby Google search results could embed maps, product feeds, or other structured or interactive information from other search services. Pending a compelling plan to unbundle vertical results from core search, my instinct is to save this problem for later — albeit perhaps requiring disclosure of favored treatment Google gives its own search services, or limiting the permissible extent of such favored treatment.

In other instances, market structure and product design yield a natural vision of products that could be separate, generally are separate, and should rightly remain separate. To my eye, these principles ring particularly true in the separation between search marketing and affiliate marketing. There is no logical reason why GAN advertisers should enjoy the only listings with images. Nor is there any logical reason why all GAN ads should appear above all right-side AdWords ads. When Google grants its GAN advertisers these special benefits, the best conclusion is that Google is using its dominance in search to establish dominance in affiliate marketing — seizing an unearned advantage over competing affiliate marketing services. These exclusionary tactics are unjustified and improper, and they ought not be permitted.

Google’s first step should be to cease tying Google Affiliate Network to preferred search placement, format, and terms: An advertiser seeking to include image ads should not have to sign up with GAN, nor should GAN ads arbitrarily appear above competitors. A recent post at Channel Dollars off-handedly reports that Product Listing Ads “has been taken out” of GAN and “is being merged into” AdWords. That’s a fair start. But even temporary ties can impede competition, and Google has delivered these large benefits only to GAN advertisers for some ten months.

Meanwhile, Google’s preferred treatment of selected GAN advertisers foreshadows a worrisome future. If Google can give preferred treatment to advertisers who use GAN, what prevents preferred treatment of advertisers who support Google’s regulatory agenda, and inferior treatment of advertisers who complain to policy-makers? Indeed, I doubt that Google invited to Product Listing Ads any advertisers who have publicly criticized Google’s practices. Google’s ability to distribute valuable but opaque favors to preferred advertisers — and to withhold such favors from anyone Google dislikes — makes Google’s power that much stronger and, to my eye, that much more troubling.

Facebook Leaks Usernames, User IDs, and Personal Details to Advertisers updated May 26, 2010

Browse Facebook, and you wouldn’t expect Facebook’s advertisers to learn who you are. After all, Facebook’s privacy policy and blog posts promise not to share user data with advertisers except when users grant specific permission. For example, on April 6, 2010 Facebook’s Barry Schnitt promised: “We don’t share your information with advertisers unless you tell us to (e.g. to get a sample, hear more, or enter a contest). Any assertion to the contrary is false. Period.”

My findings are exactly the contrary: Merely clicking an advertiser’s ad reveals to the advertiser the user’s Facebook username or user ID. With default privacy settings, the advertiser can then see almost all of a user’s activity on Facebook, including name, photos, friends, and more.

In this article, I show examples of Facebook’s data leaks. I compare these leaks to Facebook’s privacy promises, and I point out that Facebook has been on notice of this problem for at least eight months. I conclude with specific suggestions for Facebook to fix this problem and prevent its reoccurrence.

Details of the Data Leak

Facebook’s data leak is straightforward: Consider a user who clicks a Facebook advertisement while viewing her own Facebook profile, or while viewing a page linked from her profile (e.g. a friend’s profile or a photo). Upon such a click, Facebook provides the advertiser with the user’s Facebook username or user ID.

Facebook leaks usernames and user IDs to advertisers because Facebook embeds usernames and user IDs in URLs which are passed to advertisers through the HTTP Referer header. For example, my Facebook profile URL is http://www.facebook.com/bedelman. Notice my username (yellow).

Of course, it would be incorrect to assume that a person looking at a given profile is in fact the owner of that profile. A request for a given profile might reflect that user looking at her own profile, but it might instead be some other user looking at the user’s profile. However, when a user views her own profile page, Facebook automatically embeds a “profile” tag (green) in the URL:

http://www.facebook.com/bedelman?ref=profile

Furthermore, when a user clicks from her profile page to another page, the resulting URL still bears the user’s own user ID or username, along with the details of the later-requested page. For example, when I view a friend’s profile, the resulting URL is as shown below. Notice the continued reference to my username (yellow) and the fact that this is indeed my profile (green), along with an appendage naming the user whose page I am now viewing (blue).

http://www.facebook.com/bedelman?ref=profile#!/pacoles

Each of these URLs is passed to advertisers whenever a user clicks an ad on Facebook. For example, when I clicked a Livingsocial ad on my own profile page, Facebook redirected me to the advertiser, yielding the following traffic to the advertiser’s server. Notice the transmission in the Referer header (red) of my username (yellow) and the fact that I was viewing my own profile page (green).

GET /deals/socialads_reflector?do_not_redirect=1&preferred_city=152&ref=AUTO_LOWE_Deals_1273608790_uniq_bt1_b100_oci123_gM_a21-99 HTTP/1.1
Accept: */*
Referer: http://www.facebook.com/bedelman?ref=profile

Host: livingsocial.com

The same transmission occurs when a user clicks from her profile page to a friend’s page. For example, I clicked through to a friend’s profile, http://www.facebook.com/bedelman?ref=profile#!/pacoles, where I clicked another Livingsocial ad. Again, Facebook’s redirect caused my browser to transmit in its Referer header (red) my username (yellow) and the fact that that username reflects my personal profile (green). Interestingly, my friend’s username was omitted from the transmission because it appeared after a pound sign, which is automatically excluded from Referer transmission.

GET /deals/socialads_reflector?do_not_redirect=1&preferred_city=152&ref=AUTO_LOWE_Deals_1273608790_uniq_bt1_b100_oci123_gM_a21-99 HTTP/1.1
Accept: */*
Referer: http://www.facebook.com/bedelman?ref=profile

Host: livingsocial.com

In further testing, I confirmed that the same transmission occurs when a user clicks from her profile page to a photo page, or to any of various other pages linked from a user’s profile.

With a Facebook member’s username or user ID, current Facebook defaults allow an advertiser (and anyone else) to obtain a user’s name, gender, other profile data, picture, friends, networks, wall posts, photos, and likes. Furthermore, the advertiser already knows the user’s basic demographics, since the advertiser knows the user fits the profile the advertiser had requested from Facebook. For example, in grey highlighting above, the advertiser learned from Facebook my age, gender, and geographic location.
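
To make the mechanics concrete, here is a minimal sketch (my own illustration in Python, not code from Facebook or from any advertiser) of how an advertiser's server-side code could recover a Facebook username from a Referer header of the kind shown above:

from urllib.parse import urlparse, parse_qs

def facebook_user_from_referer(referer):
    """Recover a Facebook username/ID from a Referer header such as
    http://www.facebook.com/bedelman?ref=profile"""
    parsed = urlparse(referer)
    if not parsed.netloc.endswith("facebook.com"):
        return None
    username = parsed.path.strip("/")            # e.g. "bedelman"
    # ref=profile marks a page reached from the user's own profile,
    # so the username identifies the person who clicked the ad.
    own_profile = parse_qs(parsed.query).get("ref") == ["profile"]
    return username, own_profile

print(facebook_user_from_referer("http://www.facebook.com/bedelman?ref=profile"))
# -> ('bedelman', True)

Note that the fragment after the pound sign (e.g. #!/pacoles) is never included in the Referer header, which is why the friend's username was absent from the second transmission above.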

Facebook’s Contrary Statements about User Privacy vis-a-vis Advertisers

Facebook has made specific promises as to what information it will share with advertisers. For one, Facebook’s privacy policy promises “we do not share your information with advertisers without your consent” (section 5). Then, in section 7, Facebook lists eleven specific circumstances in which it may share information with others — but none of these circumstances applies to the transmission detailed above.

Facebook’s recent blog postings also deny that Facebook shares users’ identities with advertisers. In an April 6, 2010 post, Facebook promised: “We don’t share your information with advertisers unless you tell us to (e.g. to get a sample, hear more, or enter a contest). Any assertion to the contrary is false. Period.” Facebook’s prior postings were similar. July 1, 2009: “Facebook does not share personal information with advertisers except under the direction and control of a user. … You can feel confident that Facebook will not share your personal information with advertisers unless and until you want to share that information.” December 9, 2009: “Facebook never shares personal information with advertisers except under your direction and control.” As to all these claims, I disagree. Sharing a username or user ID upon a single click, without any disclosure or indication that such information will be shared, is not at a user’s direction and control.

Facebook Has Been on Notice of This Problem for Eight Months

AT&T Labs researcher Balachander Krishnamurthy and Worcester Polytechnic Institute professor Craig Wills previously identified the general problem of social networks leaking user information to advertisers, including leakage through the Referer headers detailed above. In August 2009, their paper On the Leakage of Personally Identifiable Information Via Online Social Networks was posted to the web and presented at the Workshop on Online Social Networks (WOSN).

Through Krishnamurthy and Wills’ research, Facebook eight months ago received actual notice of the data leakage at issue. A September 2009 MediaPost article confirms Facebook’s knowledge through its spokesperson’s response. However, Facebook spokesperson Simon Axten understated the severity of the data leak: Axten commented “The average Facebook user views a number of different profile pages over the course of a session …. It’s thus difficult for a tracking website to know whether the identifier belongs to the person being tracked, or whether it instead belongs to a friend or someone else whose profile that person is viewing.” I emphatically disagree. As shown above, when a user views her own profile, or a page linked from her own profile, the “?ref=profile” tag is added to the URL — exactly confirming the identity of the profile owner.

What Facebook Should Do

Since receiving actual notice of these data leaks, Facebook has implemented scores of new features for advertising, monetization, information-sharing, and reorganization. Inexplicably, Facebook has failed to address leakage of user information to advertisers. That’s ill-advised and short-sighted: Users don’t expect ad clicks to reveal their names and details, and Facebook’s privacy policy and blog posts promise to honor that expectation. So Facebook needs to adjust its actual practices to meet its promises.

Preventing advertisers from receiving usernames and user IDs is strikingly straightforward: A modified redirect can mask referring URLs. Currently, Facebook uses a simple HTTP 301 redirect, which preserves referring URLs — exactly creating the problem detailed above. But a FORM POST redirect, META REFRESH redirect, or JavaScript redirect could conceal referring URLs — preventing advertisers from receiving username or user ID information.
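
As a rough sketch of the kind of fix described above (my own illustration, not Facebook's actual code; the framework, route, and ad-lookup table are hypothetical), an ad click handler could return a META REFRESH interstitial rather than an HTTP 301, concealing the referring facebook.com URL from the advertiser:

# Minimal sketch using Flask; the route, ad table, and URLs are hypothetical.
from flask import Flask, abort
from markupsafe import escape

app = Flask(__name__)

AD_DESTINATIONS = {  # hypothetical mapping from ad IDs to advertiser landing pages
    "12345": "http://livingsocial.com/deals/socialads_reflector",
}

@app.route("/ad_click/<ad_id>")
def ad_click(ad_id):
    target = AD_DESTINATIONS.get(ad_id)
    if target is None:
        abort(404)
    # Return a META REFRESH interstitial instead of a 301 redirect, so the
    # referring facebook.com URL (username, ref=profile tag, etc.) is not
    # passed along to the advertiser's server.
    return ("<html><head>"
            f'<meta http-equiv="refresh" content="0;url={escape(target)}">'
            "</head><body>Redirecting...</body></html>")

As noted above, a FORM POST or JavaScript redirect could serve the same purpose, and any of these approaches requires changing only the single ad click-redirect script.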

Instead, Facebook has partially implemented the pound sign method described above — putting some, but not all, sensitive information after a pound sign, with the result that sometimes this information is not transmitted as a Referer. If fully implemented across the Facebook site, this approach might prevent the data leakage I uncovered. However, in my testing, numerous within-Facebook links bypass the pound sign masking. In any event, an improved redirect would be much simpler to implement — requiring only a single adjustment to the ad click-redirect script, rather than requiring changes to URL formats across the Facebook site.

Finally, Facebook should inform users of what has occurred. Facebook should apologize to users, explain why it didn’t live up to its explicit privacy commitments, and establish procedures — at least robust testing, if not full external review — to assure that users’ privacy is correctly protected in the future.

Update – May 26, 2010

On May 20, 2010, the Wall Street Journal reported the problem detailed above. On or about that same day, Facebook removed the ref=profile tags that were the crux of the data leak.

I yesterday spoke with Arturo Bejar, a Facebook engineer who investigated this problem. Arturo told me that after Krishnamurthy and Wills’ article, he reviewed relevant Facebook systems in search of leakage of user information. At that time, he found no such leakage: Facebook revealed the URLs users were browsing when they clicked ads, but did not indicate whether the user clicking a given ad was in fact the owner of the profile showing that ad. However, in a subsequent Facebook redesign, beginning in February 2010, Facebook user home pages received a new “profile” button which carried the ref=profile URL tags I analyze above. Because this tag was added without a further privacy review, Arturo tells me that he and others at Facebook did not consider the interaction between this tag and the problem I describe above. Arturo says that’s why this problem occurred despite the prior Krishnamurthy and Wills article.

Arturo also pointed out that the problem I describe did not affect advertisers whose landing pages were pages on Facebook (rather than advertisers’ own external sites).

Meanwhile, Facebook’s May 24 “Protecting Privacy with Referrers” presents Facebook’s view of the problem in greater detail. Facebook’s posting offers a fine analysis of the various methods of redirects and Facebook’s choice among them. It’s worth a read.

After discussing the problem with Arturo and reading Facebook’s new post, I reached a more favorable impression of Facebook’s response. But my view is tempered by Facebook’s ill-advised attempts to downplay the breach.

  • Rather than affirmatively describing the specific design flaw, Facebook’s post describes what “could” “potentially” occur. Facebook’s post never gives a clear affirmative statement of the problem.
  • Facebook says advertisers would need to “infer” a user’s username/ID. But usernames and IDs are sent directly, in clear and unambiguous URLs, hardly requiring complex analysis.
  • Facebook claims that the breach affected only “one case … if a user takes a specific route on the site” (WSJ quote). Facebook also calls the problem “a rarely occurring case” (posting). I dispute these characterizations. It is hardly “rare” for a user to view her own profile. To view her own profile and click an ad? There’s no reason to think that’s any less frequent than clicking an ad elsewhere. To view her own profile, click through to another page, and then click an ad? That’s perfectly standard. Furthermore, although Facebook told the Journal there is “one case” in which data is leaked improperly, in fact I’ve found many such cases including clicking from profile to ad, from profile to friend’s page to ad, and from profile to photo page to ad, to name three.
  • Through transmission in HTTP Referer headers, usernames and IDs reach advertisers’ web servers in a form such that default server log files would store this data indefinitely, and default analytics would tabulate it accordingly. Facebook says it has “no reason to believe that any advertisers were exploiting” the data breach I reported, but the fact is, this data ends up in a place where advertisers could (and, as to historic data, still can) access it easily, using standard tools, and at their convenience.
  • Although Facebook’s post says the problem is “potential,” I found that a user’s username/ID is sent with each and every click in the affected circumstances.

So the problem was substantial, real, and immediate. Facebook errs in suggesting the contrary.

Google Inc. (teaching materials) with Thomas Eisenmann

Edelman, Benjamin, and Thomas R. Eisenmann. “Google Inc.” Harvard Business School Case 910-036, January 2010. (Revised April 2011.) (Winner of ECCH 2011 Award for Outstanding Contribution to the Case Method – Strategy and General Management.) (educator access at HBP.)

Describes Google’s history, business model, governance structure, corporate culture, and processes for managing innovation. Reviews Google’s recent strategic initiatives and the threats they pose to Yahoo, Microsoft, and others. Asks what Google should do next. One option is to stay focused on the company’s core competence, i.e., developing superior search solutions and monetizing them through targeted advertising. Another option is to branch into new arenas, for example, build Google into a portal like Yahoo or MSN; extend Google’s role in e-commerce beyond search, to encompass a more active role as an intermediary (like eBay) facilitating transactions; or challenge Microsoft’s position on the PC desktop by developing software to compete with Office and Windows.

Supplements:

Google Inc. (Abridged) – Case (HBP 910032)

Teaching Materials:

Google Inc. and Google Inc. (Abridged) – Teaching Note (HBP 910050)

Google Toolbar Tracks Browsing Even After Users Choose "Disable"

Disclosure: I serve as co-counsel in unrelated litigation against Google, Vulcan Golf et al. v. Google et al. I also serve as a consultant to various companies that compete with Google. But I write on my own — not at the suggestion or request of any client, without approval or payment from any client.

Run the Google Toolbar, and it’s strikingly easy to activate “Enhanced Features” — transmitting to Google the full URL of every page-view, including searches at competing search engines. Some critics find this a significant privacy intrusion (1, 2, 3). But in my testing, even Google’s bundled toolbar installations provide some modicum of notice before installing. And users who want to disable such transmissions can always turn them off – or so I thought until I recently retested.

In this article, I provide evidence calling into question the ability of users to disable Google Toolbar transmissions. I begin by reviewing the contents of Google’s "Enhanced Features" transmissions. I then offer screenshot and video proof showing that even when users specifically instruct that the Google Toolbar be “disable[d]”, and even when the Google Toolbar seems to be disabled (e.g., because it disappears from view), Google Toolbar continues tracking users’ browsing. I then revisit how Google Toolbar’s Enhanced Features get turned on in the first place – noting the striking ease of activating Enhanced Features, and the remarkable absence of a button or option to disable Enhanced Features once they are turned on. I criticize the fact that Google’s disclosures have worsened over time, and I conclude by identifying changes necessary to fulfill users’ expectations and protect users’ privacy.

"Enhanced Features" Transmissions Track Page-Views and Search Terms

Certain Google Toolbar features require transmitting to Google servers the full URLs users browse. For example, to tell users the PageRank of the pages they browse, the Google Toolbar must send Google servers the URL of each such page. Google Toolbar’s “Related Sites” and “Sidewiki” (user comments) features also require similar transmissions.

With a network monitor, I confirmed that these transmissions include the full URLs users visit – including domain names, directories, filenames, URL parameters, and search terms. For example, I observed the transmission below when I searched Yahoo (green highlighting) for "laptops" (yellow).

GET /search?client=navclient-auto&swwk=358&iqrn=zuk&orig=0gs08&ie=UTF-8&oe=UTF-8&querytime=kV&querytimec=kV &features=Rank:SW:&q=info:http%3a%2f%2frds.yahoo.com%2f_ylt%3dA0oGkl32p1tLT2EB8ohXNyoA%2fSIG%3d18045klhr%2f
EXP%3d1264384374%2f**http%253a%2f%2fsearch.yahoo.com%2fsearch%253fp%3dlaptops%2526fr%3dsfp%2526xargs%3d12KP
jg1itSroGmmvmnEOOIMLrcmUsOkZ7Fo5h7DOV5CtdY6hNdE%25252DIfXpP0xZg6WO8T7xvSy7HBreVFdJGu277WVk0qfeG%25255FGOW%2
5255F772GnNVme5ujWkF3s%25252DJ%25255F0%25252Dmdn4RvDE8%25252E%2526pstart%3d7%2526b%3d11&googleip=O;72.14.20
4.104;226&ch=711984234986 HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; GoogleToolbar 6.4.1208.1530; Windows XP 5.1; MSIE 8.0.6001.18702)
Accept-Language: en
Host: toolbarqueries.google.com
Connection: Keep-Alive
Cache-Control: no-cache
Cookie: PREF=ID=…

HTTP/1.1 200 OK …
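
One can verify what this transmission reveals by decoding the q= parameter from the request above. A short sketch (my own, for illustration; the value is abbreviated) that undoes the nested URL-encoding and recovers the full Yahoo search URL, including the search term:

from urllib.parse import unquote, urlparse, parse_qs

# Abbreviated value of the q= parameter sent to toolbarqueries.google.com above.
q = ("info:http%3a%2f%2frds.yahoo.com%2f_ylt%3d...%2f"
     "**http%253a%2f%2fsearch.yahoo.com%2fsearch%253fp%3dlaptops%2526fr%3dsfp")

inner = unquote(unquote(q.split("**", 1)[1]))   # undo the double URL-encoding
print(inner)                                    # http://search.yahoo.com/search?p=laptops&fr=sfp
print(parse_qs(urlparse(inner).query)["p"])     # ['laptops']  <- the Yahoo search term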

 

Screenshots – Google Toolbar Continues Tracking Browsing Even When Users "Disable" the Toolbar via Its "X" Button

Consistent with modern browser plug-in standards, the current Google Toolbar features an “X” button to disable the toolbar:


I clicked the “X” and received a confirmation window:

I chose the top option and pressed OK. The Google Toolbar disappeared from view, indicating that it was disabled for this window, just as I had requested. Within the same window, I requested the Whitehouse.gov site:

Google Toolbar disappeared from view, as instructed, and seems to be disabled.

Although I had asked that the Google Toolbar be "disable[d] … for this window" and although the Google Toolbar disappeared from view, my network monitor revealed that Google Toolbar continued to transmit my browsing to its toolbarqueries.google.com server:

See also a screen-capture video memorializing these transmissions.

These Nonconsensual Transmissions Affect Important, Routine Scenarios (added 1/26/10, 12:15pm)

In a statement to Search Engine Land, Google argued that the problems I reported are "only an issue until a user restarts the browser." I emphatically disagree.

Consider the nonconsensual transmission detailed in the preceding section: A user presses "x", is asked "When do you want to disable Google Toolbar and its features?", and chooses the default option, to "Disable Google Toolbar only for this window." The entire purpose of this option is to take effect immediately. Indeed, it would be nonsense for this option to take effect only upon a browser restart: Once the user restarts the browser, the "for this window" disabling is supposed to end, and transmissions are supposed to resume. So Google Toolbar transmits web browsing before the restart, and after the restart too. I stand by my conclusion: The "Disable Google Toolbar only for this window" option doesn’t work at all: It does not actually disable Google Toolbar for the specified window, not immediately and not upon a restart.

Crucially, these nonconsensual transmissions target users who are specifically concerned about privacy. A user who requests that Google Toolbar be disabled for the current window is exactly seeking to do something sensitive, confidential, or embarrassing, or in any event something he does not wish to record in Google’s logs. This privacy-conscious user deserves extra privacy protection. Yet Google nonetheless records his browsing. Google fails this user — specifically and unambiguously promising to immediately stop tracking when the user so instructs, but in fact continuing tracking unabated.

Google Toolbar Continues Tracking Browsing Even When Users "Disable" the Toolbar via "Manage Add-Ons"

Internet Explorer 8 includes a Manage Add-Ons screen to disable unwanted add-ons. On a PC with Google Toolbar, I activated Manage Add-Ons, clicked the Google Toolbar entry, and chose Disable. I accepted all defaults and closed the dialog box.

Again I requested the whitehouse.gov site. Again my network monitor revealed that Google Toolbar continued to transmit my browsing to its toolbarqueries.google.com server. Indeed, as I requested various pages on the whitehouse.gov site, Google Toolbar transmitted the full URLs of those pages, as shown in the second and third circles below.

See also a screen-capture video memorializing these transmissions.

In a further test, performed January 23, I reconfirmed the observations detailed here. In that test, I demonstrated that even checking the "Google Toolbar Notifier BHO" box and disabling that component does not impede these Google Toolbar transmissions. I also specifically confirmed that these continuing Google Toolbar transmissions track users’ searches at competing search engines. See screen-capture video.

In my tests, in this Manage Add-Ons scenario, Google Toolbar transmissions cease upon the next browser restart. But no on-screen message alerts the user that a browser restart is needed for changes to take effect, so the user has no reason to think a restart is required.

Google Toolbar Continues Tracking Browsing When Users "Disable" the Toolbar via Right Click (added 1/26/10 11:00pm)

Danny Sullivan asked me whether Google Toolbar continues Enhanced Features transmissions if users hide Google Toolbar via IE’s right-click menu. In a further test, I confirmed that Google Toolbar transmissions continue in these circumstances. Below are the four key screenshots: 1) I right-click in empty toolbar space, and I uncheck Google Toolbar. 2) I check the final checkbox and choose Disable. 3) Google Toolbar disappears from view and appears to be disabled. I browse the web. 4) I confirm that Google Toolbar’s transmissions nonetheless continue. See also a screen-capture video.


Users May Easily or Accidentally Activate “Enhanced Features” Transmissions

Google Toolbar invites users to activate Enhanced Features with a single click, the default. Also, notice the self-contradictory statements (transmitting ‘site’ names versus full ‘URL’ addresses).

The above-described transmissions occur only if a user runs Google Toolbar in its “Enhanced Features” mode. But it is strikingly easy for a user to stumble into this mode.

For one, the standard Google Toolbar installation encourages users to activate Enhanced Features via a “bubble” message shown at the conclusion of installation. See the screenshot at right. This bubble presents a forceful request for users to activate Sidewiki: The feature is described as “enhanced” and “helpful”, and Google chooses to tout it with a prominence that indicates Google views the feature as important. Moreover, the accept button features bold type plus a jumbo size (more than twice as large as the button to decline). And the accept button has the focus – so merely pressing Space or Enter (easy to do accidentally) serves to activate Enhanced Features without any further confirmation.

I credit that the bubble mentions the important privacy consequence of enabling Enhanced Features: “For enhanced features to work, Toolbar has to tell us what site you’re visiting by sending Google the URL.” But this disclosure falls importantly short. For one, Enhanced Features transmits not just “sites” but specific full URLs, including directories, filenames, URL parameters, and search keywords. Indeed, Google’s bubble statement is internally-inconsistent – indicating transmissions of “sites” and “URLs” as if those are the same thing, when in fact the latter is far more intrusive than the former, and the latter is accurate.

The bubble also falls short in its presentation of Google Toolbar’s Privacy Policy. If a user clicks the Privacy Policy hyperlink, the user receives the image shown in the left image below. Notice that the Privacy Policy loads in an unusual window with no browser chrome – no Edit-Find option to let a user search for words of particular interest, no Edit-Select All and Edit-Copy option to let a user copy text to another program for further review, no Save or Print options to let a user preserve the file. Had Google used a standard browser window, all these features would have been available, but by designing this nonstandard window, Google creates all these limitations. The substance of the document is also inapt. For one, “Enhanced Toolbar Features” receive no mention whatsoever until the fifth on-screen page (right image below). Even there, the first bullet describes transmission of “the addresses [of] the sites” users visit – again falsely indicating transmission of mere domain names, not full URLs. The second bullet mentions transmission of “URL[s]” but says such transmission occurs “[w]hen you use Sidewiki to write, edit, or rate an entry.” Taken together, these two bullets falsely indicate that URLs are transmitted only when users interact with Sidewiki, and that only sites are transmitted otherwise, when in fact URLs are transmitted whenever Enhanced Features are turned on.

Clicking the ‘Privacy Policy’ link yields this display. Note the lack of browser chrome — no options to search text, copy to the clipboard, save, or print. Note also the absence of any on-screen mention of the special privacy concerns presented by Enhanced Features.

The first discussion of Enhanced Features appears five pages down. Furthermore, the text falsely indicates that ordinary transmission covers mere domain names, not full URLs. The text says "URL[s]" are transmitted "when you use Sidewiki" and indicates that URLs are not transmitted otherwise.

Certain bundled installations make it even easier for users to get Google Toolbar unrequested, and to end up with Enhanced Features too. With dozens of Google Toolbar partners, using varying installation tactics, a full review of their practices is beyond the scope of this article. But they provide significant additional cause for concern.

Enhanced Features: Easy to Enable, Hard to Turn Off

Google Toolbar’s Options screen shows no obvious way to disable Enhanced Features.

The preceding sections show that users can enable Google Toolbar’s Enhanced Features with a single click on a prominent, oversized button. But disabling Enhanced Features is much harder.

Consider a user who changes her mind about Enhanced Features but wishes to keep Google Toolbar in its basic mode. How exactly can she do so? Browsing the Google Toolbar’s entire Options screen, I found no option to disable Enhanced Features. Enhanced Features are easily enabled with a single click, during installation (as shown above) or thereafter. But disabling Enhanced Features seems to require uninstalling Google Toolbar altogether; in any event, there is certainly no comparably quick command to turn Enhanced Features off.

I’m reminded of The Eagles’ Hotel California: "you can check out anytime you like, but you can never leave." And of course, as discussed above, a user who chooses the X button or Manage Add-Ons will naturally believe the Google Toolbar is disabled, when in fact it continues transmissions unabated.

Google Toolbar Disclosures Have Worsened Over Time

Google Toolbar’s historic installer provided superior notice

I first wrote about Google Toolbar’s installation and privacy disclosures in my March 2004 FTC comments on spyware and adware. In that document, I praised Google’s then-current toolbar installation sequence, which featured the impressive screen shown at right.

I praised this screen with the following discussion:

I consider this disclosure particularly laudable because it features the following characteristics: It discusses privacy concerns on a screen dedicated to this topic, separate from unrelated information and separate from information that may be of lesser concern to users. It uses color and layout to signal the importance of the information presented. It uses plain language, simple sentences, and brief paragraphs. It offers the user an opportunity to opt out of the transmission of sensitive information, without losing any more functionality than necessary (given design constraints), and without suffering penalties of any kind (e.g. forfeiture of use of some unrelated software). As a result of these characteristics, users viewing this screen have the opportunity to make a meaningful, informed choice as to whether or not to enable the Enhanced Features of the Google Toolbar.

I stand by that praise. But six years later, Google Toolbar’s installation sequence is inferior in every way:

  • Now, initial Enhanced Features privacy disclosures appear not in their own screen, but in a bubble pitching another feature (Sidewiki). Previously, format (all-caps, top-of-page), color (red) and language ("… not the usual yada yada") alerted users to the seriousness of the decision at hand.
  • Now, Google presents Enhanced Features as a default with an oversized button, bold type, and acceptance via a single keystroke. Previously, neither option was a default, and both options were presented with equal prominence.
  • Now, privacy statements are imprecise and internally-inconsistent, muddling the concepts of site and URL. Previous disclosures were clear in explaining that acceptance entails "sending us [Google] the URL" of each page a user visits.
  • The current feature name, "Enhanced Features," is less forthright than the prior "Advanced Features" label. The name "Advanced Features" appropriately indicated that the feature is not appropriate for all users (but is intended for, e.g., "advanced" users). In contrast, the current "Enhanced Features" name suggests that the feature is an "enhancement" suitable for everyone.

Google’s Undisclosed Taskbar Button

This ‘Google’ button was added to my Taskbar without any notice or consent whatsoever — highly unusual for a toolbar or any other software download.

The Google Toolbar also added a “Google” button to my Taskbar, immediately adjacent to the Start button. The Toolbar installer added this button without any disclosure whatsoever in the installation sequence – not on the toolbar.google.com web page, not in the installer EXE, not anywhere else.

An in-Taskbar button is not consistent with ordinary functions users reasonably expect when they seek and install a “toolbar.” Because this function arrives on a user’s computer without notice and without consent, it is an improper intrusion.

What Google Should Do

Google’s first step is simple: Fix the Toolbar so that X and Manage Add-Ons in fact do what they promise. When a user disables Google Toolbar, all Enhanced Features transmissions need to stop, immediately and without exception. This change must be deployed to all Google Toolbar users straightaway.

Google also needs to clean up the results of its nonconsensual data collection. In particular, Google has collected browsing data from users who specifically declined to allow such data to be collected. In some instances this data may be private, sensitive, or embarrassing: Savvy users would naturally choose to disable Google Toolbar before their most sensitive searches. Google ordinarily doesn’t let users delete their records as stored on Google servers. But these records never should have been sent to Google in the first place. So Google should find a way to let concerned users request that Google fully and irreversibly delete their entire Toolbar histories.

Even when Google fixes these nonconsensual transmissions, larger problems remain. The current Toolbar installation sequence suffers inconsistent statements of privacy consequences, with poor presentation of the full Toolbar Privacy Statement. Toolbar adds a button to users’ Taskbar unrequested. And as my videos show, once Google puts its code on a user’s computer, there’s nothing to stop Google from tracking users even after users specifically decline. I’ve run Google Toolbar for nearly a decade, but this week I uninstalled Google Toolbar from all my PCs. I encourage others to do the same.

Google Click Fraud Inflates Conversion Rates and Tricks Advertisers into Overpaying

I’ve repeatedly reported improper placements of Google ads. In most of my write-ups, the impropriety occurs in ad placement — Google PPC ads shown in spyware popups (1, 2, 3, 4), in typosquatting sites (1, 2), or in improperly-installed and/or deceptive toolbars (1, 2). This article is different: Here, the impropriety includes a fake click — click fraud — charging an advertiser for a PPC click, when in fact the user never actually clicked.

But this is no ordinary click fraud. Here, spyware on a user’s PC monitors the user’s browsing to determine the user’s likely purchase intent. Then the spyware fakes a click on a Google PPC ad promoting the exact merchant the user was already visiting. If the user proceeds to make a purchase — reasonably likely for a user already intentionally requesting the merchant’s site — the merchant will naturally credit Google for the sale. Furthermore, a standard ad optimization strategy will lead the merchant to increase its Google PPC bid for this keyword on the reasonable (albeit mistaken) view that Google is successfully finding new customers. But in fact Google and its partners are merely taking credit for customers the merchant had already reached by other methods.

In this piece, I show the details of the spyware that tracks user browsing and fakes Google PPC ad clicks, and I identify the numerous intermediaries that perpetrate these improper charges. I then criticize Google’s decision to continue placing ads through InfoSpace, the traffic broker that connected Google to this click fraud chain. I consider this practice in light of Google’s advice to advertisers and Google’s favored arguments that click fraud problems are small and manageable. Finally, I propose specific actions Google should take to prevent these scams and to satisfy Google’s obligations to advertisers.

Introducing the Problem: A Reader’s Analogy

After reading a prior article on my site, a participant in a Register discussion forum offered a useful analogy:

Let’s say a restaurant decides [it] wants someone to hand out fliers … so they offer this guy $0.10 a flier to print some and distribute them.

The guy they hire just stands at the front door and hand[s] the fliers to anyone already walking through the door.

Restaurant pays lots of money and gains zero customers.

Guy handing out the fliers tells the owner how many fliers were printed and compares that to how many people bring the fliers into his restaurant.

The owner thinks the fliers are very successful and now offers $0.20 for each one.

It’s easy to see how the restaurant owner could be tricked. Such scams are especially easy in online advertising — where distance, undisclosed partnerships, and general opacity make it far harder for advertisers to figure out where and how Google and its partners present advertisers’ offers.

Google and Its Partners Covering Advertisers’ Sites with Spyware-Delivered Click-Fraud Popups

PPC advertisers (e.g. Finish Line) → Google → InfoSpace → Cheapstuff → Adfirmative → dSide Marketing → Netaxle → eWoss → AdOn Network → Trafficsolar

The money trail – how funds flow from advertisers through Google and seven intermediaries to Trafficsolar spyware, while traffic (viewers) flows back up the chain.

In testing of December 31, 2009, my Automatic Spyware Advertising Tester browsed Finishline.com, a popular online shoe store, on a virtual computer infected with Trafficsolar spyware (among other advertising software, all installed through security exploits without user consent). Trafficsolar opened a full-screen unlabeled popup, which ultimately redirected back to Finish Line via a fake Google PPC click (i.e., click fraud).

My AutoTester preserved screenshots, video, and a packet log of this occurrence. The full sequence of redirects:

Trafficsolar opens a full-screen popup window loading from urtbk.com, a redirect server for AdOn Network. (AdOn, of Tempe, Arizona, first caught my eye when it boasted of relationships with 180solutions/Zango and Direct Revenue. NYAG documents later revealed that AdOn distributed more than 130,000 copies of Direct Revenue spyware. More recently, I’ve repeatedly reported AdOn facilitating affiliate fraud, inflating sites’ traffic stats, and showing unrequested sexually-explicit images.)

AdOn redirects to eWoss. (eWoss, of Overland Park, Kansas, has appeared in scores of spyware popups recorded by my testing systems.)

eWoss redirects to Netaxle. (NetAxle, of Prairie Village, Kansas, has also appeared in numerous popups — typically, as here, brokering traffic from eWoss.)

Netaxle redirects to dSide Marketing. (dSide Marketing, of Montreal, Canada, says it provides full-service SEO and SEM services.)

dSide Marketing redirects to Adfirmative. (Adfirmative, of Austin, Texas, promises “click-fraud protected, targeted advertising” and “advanced click-fraud prevention.”)

Adfirmative redirects to Cheapstuff. (Cheapstuff fails to provide an address on its web site or in Whois, though its posted phone number is in Santa Monica, California. Cheapstuff’s web site shows a variety of commercial offers with a large number of advertisements.)

Cheapstuff redirects to InfoSpace. (InfoSpace, of Bellevue, Washington, is discussed further in the next section.)

InfoSpace redirects to Google, which redirects through DoubleClick and onwards back to Finish Line — the same site my tester had been browsing in the first place.
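
To make that sequence concrete, here is a minimal sketch -- not my actual AutoTester -- of how a packet log might be walked to reconstruct such a redirect chain and flag a Google ad-click URL that loads via server redirect rather than via a user's click on a search results page. The log format (a list of records with url, status, and location fields) and the googleadservices.com / "/aclk" patterns for Google click URLs are illustrative assumptions.

    # redirect_chain.py -- illustrative sketch only, not the actual AutoTester code.
    # Assumes a packet log represented as a list of dicts: {"url", "status", "location"}.
    from urllib.parse import urlparse

    def build_chain(log, start_url, max_hops=20):
        """Follow HTTP 3xx Location headers from start_url through the logged responses."""
        by_url = {entry["url"]: entry for entry in log}
        chain, url = [start_url], start_url
        while url in by_url and 300 <= by_url[url]["status"] < 400 and len(chain) <= max_hops:
            url = by_url[url]["location"]
            chain.append(url)
        return chain

    def looks_like_fake_click(chain):
        """Flag a Google ad-click URL reached by redirect instead of a user click."""
        for i, url in enumerate(chain):
            parsed = urlparse(url)
            host = parsed.hostname or ""
            # Hypothetical patterns for a Google PPC click URL.
            if host.endswith("googleadservices.com") or parsed.path.startswith("/aclk"):
                # Position 0 would mean the user navigated there directly; any later
                # position means the click URL was reached through a chain of redirects.
                return i > 0
        return False

In the Finish Line example above, the Google click URL appears only at the end of a long chain of redirects that began with a Trafficsolar popup -- exactly the pattern this check flags.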

This placement is a bad deal for Finish Line for at least two reasons. First, Google charges Finish Line a fee to access a user already at Finish Line’s site. But that’s more of a shake-down than genuine advertising: an advertiser should not have to pay to reach a user already at its site. Furthermore, Google styles its advertising as “pay per click”, promising advertisers that “You’re charged only if someone clicks your ad.” But here, the video and packet log clearly confirm that the Google click link was invoked without a user even seeing a Google ad link, not to mention clicking it. Advertisers paying high Google prices deserve high-quality ad placements, not spyware popups and click fraud.

Finally, the popup lacks the labeling specifically required by FTC precedent. Consistent with the FTC’s settlements in its Direct Revenue and Zango cases, every spyware/adware popup must be labeled with the name of the program that caused the popup, along with uninstall instructions. Furthermore, the FTC has taken an appropriately dim view of advertising software installed on users’ computers without user consent. But every single Trafficsolar installation I’ve ever seen has arrived on my test computers through security exploits, without consent. For these reasons, this Trafficsolar-Google popup clearly falls afoul of applicable FTC requirements.

Critiquing InfoSpace’s Role

As shown in the prior section and diagram, traffic flows through a remarkable seven intermediaries en route from Trafficsolar spyware to the victim Google advertiser. Looking at such a lengthy chain, the problem may seem intractable: How could Google effectively supervise a partner’s partner’s partner’s partner’s partner’s partner’s partner’s partner? That insurmountable challenge is exactly why Google should never have gone down this path. Instead, Google should place ads only through the companies with which Google has direct relationships.

In this instance, when traffic finally gets to Google, it comes through a predictable source: InfoSpace. It was InfoSpace, and InfoSpace alone, that distributed Google ads into the morass of subsyndicators and redistributors detailed above.

Flipping through my records of prior InfoSpace observations, I was struck by the half-decade of bad behavior. Consider:

June 2005: I showed InfoSpace placing Google ads into the IBIS Toolbar which, I demonstrated in multiple screen-capture videos, was arriving on users’ computers through security exploits (without user consent). The packet log revealed that traffic flowed from IBIS directly to InfoSpace’s Go2net.com — suggesting that InfoSpace had a direct relationship with IBIS and paid IBIS directly, not via any intermediary.

August 2005: I showed InfoSpace placing ads through notorious spyware vendor Direct Revenue (covering advertisers’ sites with unlabeled popups presenting their own PPC ads). The packet log revealed that traffic flowed from Direct Revenue directly to InfoSpace — suggesting that InfoSpace had a direct relationship with Direct Revenue and paid Direct Revenue directly, not via any intermediary.

August 2005: I showed InfoSpace placing ads through notorious spyware vendor 180solutions/Zango. The packet log revealed that traffic flowed from 180solutions directly to InfoSpace — suggesting that InfoSpace had a direct relationship with 180solutions and paid 180solutions directly, not via any intermediary.

February 2009: I showed InfoSpace placing Google ads into WhenU popups that covered advertisers’ sites with their own PPC ads.

May 2009: Again, I showed InfoSpace using WhenU to cover advertisers’ sites with their own PPC ads, through partners nearly identical to the February report.

January 2010 (last week): I showed InfoSpace still placing Google ads into WhenU popups and still covering advertisers’ sites with their own PPC ads.

And those are just placements I happened to write up on my public site! Combine this pattern of behavior with InfoSpace’s well-documented accounting fraud, and InfoSpace hardly appears a sensible partner for Google and the advertisers who entrust Google to manage their spending.

Nor can InfoSpace defend this placement by claiming Cheapstuff looked like a suitable place to show ads. The Cheapstuff site features no mailing address or indication of the location of corporate headquarters. WHOIS lists a “privacy protection” service in lieu of a street address or genuine email address. These omissions are highly unusual for a legitimate advertising broker. They should have put InfoSpace and Google on notice that Cheapstuff was up to no good.

This Click Fraud Undercuts Google’s Favorite Defense to Click Fraud Complaints

When an advertiser buys a pay-per-click ad and subsequently makes a sale, it’s natural to assume that sale resulted primarily from the PPC vendor’s efforts on the advertiser’s behalf. But the click fraud detailed in this article takes advantage of this assumption by faking clicks to target purchases that would have happened anyway. Then, when advertisers evaluate the PPC traffic they bought, they overvalue this “conversion inflation” traffic — leading advertisers to overbid and overpay.

Indeed, advertisers following Google’s own instructions will fall into the overbidding trap. Discussing “traffic quality” (i.e. click fraud and similar schemes), Google tells advertisers to “track campaign performance” for “ROI monitoring.” That is, when an advertiser sees a Google ad click followed by a sale, the advertiser is supposed to conclude that ads are working well and delivering value, and that click fraud is not a problem. Google’s detailed “Click Fraud: Anecdotes from the Front Line” features a similar approach, advising that “ROI is king,” again assuming that clicks that precede purchases must be valuable clicks.

Google’s advice reflects an overly optimistic view of click fraud. Google assumes click fraudsters will send random, untargeted traffic. But click fraudsters can monitor user activities to identify a user’s likely future purchases, just as Trafficsolar does in this example. Such a fraudster can fake the right PPC clicks to get credit for traffic that appears to be legitimate and valuable — even though in fact the traffic is just as worthless as other click fraud.
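
To see how sharply this targeted click fraud can distort an advertiser's ROI arithmetic, consider a simplified, hypothetical model. All figures below are purely illustrative; none come from any real campaign or from the testing described above.

    # conversion_inflation.py -- illustrative numbers only, not data from any real campaign.
    # Shows how fake clicks aimed at users already about to buy inflate the measured
    # value per click and, under naive ROI-based bidding, the advertiser's bid.

    genuine_clicks = 1000        # real users who clicked a Google ad
    genuine_conv_rate = 0.02     # 2% of genuine clicks convert
    fake_clicks = 100            # spyware-faked clicks on users already at the merchant's site
    fake_conv_rate = 0.30        # those users were already shopping, so many "convert"
    profit_per_sale = 50.0       # merchant's margin per sale

    total_clicks = genuine_clicks + fake_clicks
    total_sales = genuine_clicks * genuine_conv_rate + fake_clicks * fake_conv_rate
    incremental_sales = genuine_clicks * genuine_conv_rate  # fake-click sales would have happened anyway

    measured_value_per_click = total_sales * profit_per_sale / total_clicks
    true_value_per_click = incremental_sales * profit_per_sale / total_clicks

    print(f"Measured value per click: ${measured_value_per_click:.2f}")  # about $2.27
    print(f"True value per click:     ${true_value_per_click:.2f}")      # about $0.91

Under these made-up numbers, a merchant who bids up to the measured value per click overpays by roughly 2.5x, precisely because the faked clicks were aimed at shoppers who would have bought anyway.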

What Google Should Do

Google’s best first step remains as in my posting last week: Fire InfoSpace. Google doesn’t need InfoSpace: high-quality partners know to approach Google directly, and Google has no need for the further subpartners that InfoSpace adds on its own.

Google also needs to pay restitution to affected advertisers. Every time Google charges an advertiser for a click that comes from InfoSpace, Google relies on InfoSpace’s promise that the click was legitimate, genuine, and lawfully obtained. But there is ample reason to doubt these promises. Google should refund advertisers for corresponding charges — for all InfoSpace traffic if Google cannot reliably determine which InfoSpace traffic is legitimate. These refunds should apply immediately and across-the-board — not just to advertisers who know how to complain or who manage to assemble exceptional documentation of the infraction.

More generally, Google must live up to the responsibility of spending other people’s money. Through its Search Network, Google takes control of advertisers’ budgets and decides, unilaterally, where to place advertisers’ ads. (Indeed, for Search Network purchases, Google to this day fails to tell advertisers what sites show their ads. Nor does Google allow opt-outs on a site-by-site basis — policies that also ought to change.) Spending others’ money, wisely and responsibly, is a weighty undertaking. Google should approach this task with significantly greater diligence and care than current partnerships indicate. Amending its AdWords Terms and Conditions is a necessary step in this process: Not only should Google do better, but contracts should confirm Google’s obligation to offer refunds when Google falls short.

I’m disappointed by Google’s repeated refusal to take the necessary precautions to prevent these scams. InfoSpace’s shortcomings are well-known, longstanding, and abundantly documented. What will it take to get Google to eject InfoSpace and protect its advertisers’ budgets?