Introducing the Automatic Spyware Advertising Tester

I’ve repeatedly shown how spyware programs claim commissions from affiliate merchants. If spyware programs and their affiliates truthfully labeled the resulting traffic as coming from spyware, networks and merchants could reject that traffic — avoiding showing merchants’ sites in unwanted pop-ups, and refusing to pay commissions on any sales that result. But in practice, spyware affiliates’ traffic is not labeled as such, and is therefore hard to distinguish from legitimate affiliates’ traffic. With hundreds of different affiliates reselling spyware-originating traffic, even the most determined merchants face difficulty in finding all their bad affiliates.

In How Affiliate Programs Fund Spyware (September 2005), I offered one way merchants and networks can uncover spyware-using affiliates: Hands-on testing. Infect a set of computers (or virtual machines) with spyware, browse the web, and track what happens. If an affiliate is found buying spyware traffic, then punish that affiliate by refusing to pay it commissions it purportedly “earned,” or even by demanding repayment of prior-period commissions.

For more than three years, I’ve run extensive hands-on tests of spyware programs, in large part to observe and record what ads were shown. But as I take on new obligations, hands-on testing becomes infeasible.

Earlier this year, I wrote a program I call the “Automatic Spyware Advertising Tester” (“AutoTester”). On a set of virtual machines infected with a variety of spyware, the AutoTester browses a set of test scenarios — viewing web pages, running searches, and even adding items to shopping carts at retailers’ sites. The AutoTester keeps a full log of what happens — including a video of what pop-ups appear, and a packet log of what network transmissions occur. If the AutoTester observes any improper traffic (such as an unexpected and unrequested affiliate link), it records that event in a log file, and it tags the video and packet log accordingly.
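
The core detection step can be sketched as a scan over logged URLs. The following is a minimal illustration in Python, not the AutoTester’s actual logic: the URL patterns, domains, and the `user_clicked` interface are all hypothetical, and a production system would need network-specific rules.

```python
import re

# Hypothetical patterns marking affiliate-network click URLs; real
# networks each use their own formats, so a production list would be
# far longer and network-specific.
AFFILIATE_PATTERNS = [
    re.compile(r"[?&](aff_?id|affid|pid)=", re.IGNORECASE),
    re.compile(r"/affiliate/", re.IGNORECASE),
]

def flag_unexpected_affiliate_urls(packet_log, user_clicked):
    """Return logged URLs that look like affiliate links but were not
    preceded by a user click -- candidates for commission fraud."""
    flagged = []
    for timestamp, url in packet_log:
        if any(p.search(url) for p in AFFILIATE_PATTERNS):
            if not user_clicked(timestamp):
                flagged.append((timestamp, url))
    return flagged

# Example: an affiliate link fires 12 seconds into the session with no
# user click recorded (hypothetical domains).
log = [
    (3.0, "http://www.example-merchant.com/"),
    (12.0, "http://tracker.example-network.com/click?aff_id=12345"),
]
print(flag_unexpected_affiliate_urls(log, user_clicked=lambda t: False))
```

The essential design point is the cross-reference between two logs: network transmissions on one hand, and user input events on the other. An affiliate link with no corresponding click is the signature of unrequested traffic.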

The AutoTester has already proven helpful for finding bad affiliates (like the six affiliates I present in today’s Spyware Still Cheating Merchants and Legitimate Affiliates, among dozens of others). But the AutoTester can equally well detect other kinds of advertising fraud. I’ve recently used the AutoTester to record widespread click fraud against “second-tier” PPC vendors, and to monitor the sequences of redirects behind syndicated display advertising. The AutoTester can even test for cookie-stuffing. So it’s a handy addition to my toolkit and an efficient way to reduce time-consuming hands-on tests. Look for more automatically-generated reports in the future.

US patent pending.

How Spyware-Driven Forced Visits Inflate Web Site Traffic Counts

The usual motive for buying spyware popup traffic is simple: Showing ads. Cover Netflix’s site with an ad for Blockbuster, and users may buy from Blockbuster instead. Same for other spyware advertisers.

But there are other plausible reasons to buy spyware traffic. In particular, cheap spyware traffic can be used to inflate a site’s traffic statistics. Buying widespread “forced visits” causes widely-used traffic measurements to overreport a site’s popularity: Traffic measurements mistakenly assume users arrived at the site because they actually wanted to go there, without considering the possibility that the visit was involuntary. Nonetheless, from the site’s perspective, forced visits offer real benefits: Investors will be willing to pay more to buy a site that seems to be more popular, and advertisers may be willing to pay more for their ads to appear. In some sectors, higher reported traffic may create a buzz of supposed popularity — helping to recruit bona fide users in the future.

Yet spyware-originating forced-visit traffic can cause serious harm. Harm may accrue to advertisers — by overcharging them as well as by placing their ads in spyware they seek to avoid. Harm may accrue to investors, by causing them to overpay for sites whose true popularity is less than traffic statistics indicate. In any event, harm accrues to consumers and to the public at large, through funding of spyware that sneaks onto users’ PCs with negative effects on privacy, reliability, and performance.

Others have previously investigated some of these problems. In December 2006, the New York Times reported that Nielsen/NetRatings cut traffic counts for Entrepreneur.com by 65% after uncovering widespread forced site visits. But forced-visit traffic is more widespread than the four specific examples the Times presented.

This article offers six further examples of sites receiving forced visits — including the spyware vendors and ad networks that are involved. The article concludes by analyzing implications — suggested policy responses for advertisers and ad networks, as well as ways of detecting sites receiving forced visits.

Example 1: IE Plugin and Paypopup Promoting Bolt.com

IE Plugin Promoting Bolt.com

In testing of April 23, I browsed Google and received the popunder shown at right (after activation) and in video. Packet log analysis reveals that traffic flowed as follows: From IE Plugin (purportedly of Belize), to Paypopup (of Ontario, Canada), to Paypopup’s multi-pops.com ad server, to Bolt (of New York). URLs in the sequence:

http://66.98.144.169/redirect/adcycle.cgi?gid=9&type=ssi&id=396
http://paypopup.com/adsDirect.php?cid=1133482&ban=1&id=ieplugin&sid=10794&pub…
http://service.multi-pops.com/adsDirect.php?ban=1&id=ieplugin&cid=1133482&sid…
http://service.multi-pops.com/links.php?data=rSe_2%2F%FE%2F1%285%FE1%2F%2B%24…
http://service.multi-pops.com/linksed.php?sn=851177371957&uip=…&siteid=iepl…
http://www.bolt.com/

As shown in the packet log, this traffic originated with IE Plugin’s Adcycle.cgi ad-loader. This ad-loader sends traffic to a variety of ad networks, as best I can tell without any targeting whatsoever. Users therefore receive numerous untargeted ad windows, typically appearing as popups and popunders.
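
Chains like the one above can be reconstructed mechanically from a packet log. Here is a simplified sketch, assuming each logged request records the URL that led to it (via redirect or Referer header); real logs are messier, with concurrent requests and missing referers, and the pairs below are abbreviated versions of the URLs listed above.

```python
def redirect_chain(requests, start):
    """Reconstruct the path traffic took, starting from the spyware's
    initial ad-loader request. `requests` is a list of (url, referer)
    pairs; assumes at most one onward hop per URL (a simplification)."""
    next_hop = {referer: url for url, referer in requests if referer}
    chain = [start]
    while chain[-1] in next_hop:
        chain.append(next_hop[chain[-1]])
    return chain

# Abbreviated pairs modeled on the IE Plugin -> Paypopup -> Bolt sequence.
reqs = [
    ("http://paypopup.com/adsDirect.php", "http://66.98.144.169/redirect/adcycle.cgi"),
    ("http://service.multi-pops.com/adsDirect.php", "http://paypopup.com/adsDirect.php"),
    ("http://www.bolt.com/", "http://service.multi-pops.com/adsDirect.php"),
]
print(redirect_chain(reqs, "http://66.98.144.169/redirect/adcycle.cgi"))
```

The output names every intermediary between the spyware’s ad-loader and the destination site, which is exactly the evidence needed to assign responsibility along the chain.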

The resulting Bolt window appears without any attribution or branding indicating what spyware caused it to appear. This lack of labeling makes it particularly hard for users to figure out what program is responsible or to take action to stop further unwanted ads. IE Plugin’s unlabeled ads are particularly harmful because users may not have authorized the installation of IE Plugin in the first place: I have repeatedly seen IE Plugin install without user consent, including via bundles assembled by notorious spyware distributor Dollar Revenue.

The packet log indicates that Bolt purchased traffic not from IE Plugin directly, but rather from Paypopup. But Paypopup’s name and product descriptions specifically indicate the kind of ads Paypopup sells: forced visits — popups that appear without an affirmative end-user choice. The inevitable result of such traffic purchases is to inflate the measured popularity of the beneficiary web sites. So even if Bolt did not know it was buying spyware-originating advertising, Bolt must have known it was receiving forced-visit traffic.

The packet log also shows that Paypopup specifically knew it was doing business with IE Plugin. Notice the repeated references to IE Plugin in the Paypopup and Multi-pops ad-loader URLs (“id=ieplugin”).
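
Source tags like this are trivial to extract mechanically, which underscores that the relationship was no secret. A sketch using Python’s standard query-string parser on the visible (non-truncated) portion of the Paypopup URL above:

```python
from urllib.parse import urlparse, parse_qs

# Visible portion of the Paypopup ad-loader URL quoted above.
url = "http://paypopup.com/adsDirect.php?cid=1133482&ban=1&id=ieplugin&sid=10794"
params = parse_qs(urlparse(url).query)

# The "id" parameter names the traffic source being credited.
print(params["id"][0])  # ieplugin
```

Because the source identifier travels in the URL itself, any party in the chain that logs requests necessarily logs who sent the traffic.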

Bolt’s “About” page includes a claim of “reach[ing] 14.9 million unique visitors each month.” Taking this claim at face value, Bolt’s relationship with Paypopup and IE Plugin raises the question: How many of Bolt’s visitors are forced to see Bolt because spyware took them there, rather than because they affirmatively chose it?

Meanwhile, Bolt boasts top-tier advertisers including Verizon (shown in part in the screenshot above), Coca-Cola, Nike, and Sony. These brand-conscious advertisers are unlikely to want their ads to appear through spyware-delivered popups.

Example 2: Yourenhancement, Adtegrity, Right Media Exchange, and AdOn Network (MyGeek) Promoting PureVideo Networks’ GrindTV

Yourenhancement Promoting GrindTV

In testing of April 29, I browsed the web and received the full-screen popup shown at right. The popup was so large and so intrusive that it even covered the Start Menu, Taskbar, and System Tray — preventing me from easily switching to another program.

Packet log analysis reveals that traffic flowed as follows: From Yourenhancement (of Los Angeles), to Adtegrity (of Grand Rapids, Michigan), to the Right Media Exchange, to AdOn Network (previously MyGeek/Cpvfeed) (of Phoenix, Arizona) to Grind TV (of El Segundo, California). URLs in the sequence:

http://63.123.224.168/mbop/display.php3?aid=19&uid=…
http://ad.adtegrity.net/imp?z=0&Z=0x0&s=4670&u=http%3A%2F%2F63.123.224.168…
http://ad.yieldmanager.com/imp?z=0&Z=0x0&s=4670&u=http%3A%2F%2F63.123.224….
http://ad.adtegrity.net/iframe3?AAAAAD4SAADV5AMAtnIBAAIADAAAAP8AAAABEwACAA…
http://ad.yieldmanager.com/iframe3?AAAAAD4SAADV5AMAtnIBAAIADAAAAP8AAAABEwA…
http://campaign.cpvfeed.com/cpvcampaign.jsp?p=110459&campaign=Mortgage&aid…
http://www.grindtv.com/p/hs444/mygeek/

Yourenhancement’s display.php3 ad-loader sends traffic to a variety of ad networks, by all indications without any targeting whatsoever. Users therefore receive numerous untargeted popups and popunders. As in the prior example, the resulting window lacks any branding to indicate what spyware caused it to appear or how users can prevent future popups from the same source.

Yourenhancement’s unlabeled ads are particularly harmful because users may not have authorized the installation of Yourenhancement in the first place: I have repeatedly seen Yourenhancement install without user consent — including in bundles assembled by DollarRevenue, in WMF exploits served from ExitExchange, in misleading ActiveX bundles packaged by IE Plugin, and in a CoolWebSearch exploit served from Runeguide.

The packet log indicates that GrindTV purchased traffic not from Yourenhancement directly, but rather from AdOn Network. However, advertising professionals should know that buying advertising from AdOn Network inevitably means receiving traffic from spyware. For example, Direct Revenue’s site previously disclosed that Direct Revenue shows AdOn ads, while AdOn’s site admitted showing ads through both Direct Revenue (“OfferOptimizer”) and Zango (180solutions). My site has repeatedly covered AdOn’s role in spyware placements (1, 2, 3, 4). I continue to observe traffic flowing directly to MyGeek from various spyware installed without user consent, including Look2me and Targetsaver. With voluminous documentation freely available, advertisers cannot reasonably claim not to know what kind of ads AdOn sells.

The GrindTV site is operated by PureVideo Networks. I have previously seen spyware-originating forced visits to other PureVideo sites, including Stupidvideos.com and Hollywoodupclose.com.

PureVideo’s “News” page specifically touts the company’s reported popularity (“among top 10 US video sites by market share”, “top growing sites”, “StupidVideos Climb Charts”, etc.). In March, ComScore even announced that PureVideo sites were the ninth-fastest growing properties on the web. But in that same month, I observed widespread forced-visit promotion of multiple PureVideo sites. Forced visits can easily cause a dramatic traffic jump — the same occurrence ComScore reported. It’s hard to know whether PureVideo’s forced visits inflated ComScore’s measurements of PureVideo’s popularity, but that seems like a plausible possibility, particularly in light of Nielsen/NetRatings’ 2006 cut of Entrepreneur’s traffic (after Entrepreneur had used similar tactics).

PureVideo’s Investors & Advisors page indicates that PureVideo has received outside investment, including a $5.6 million investment from SoftBank Capital.

Example 3: Yourenhancement, Adtegrity, Right Media Exchange, and AdOn Network (MyGeek) Promoting Broadcaster.com

Yourenhancement Promoting Broadcaster

In testing of April 29, I browsed the web and received the popup shown at right.

Packet log analysis reveals that traffic flowed as follows: From Yourenhancement (widely installed without consent, as set out above) to Adtegrity, to the Right Media Exchange, to AdOn Network to Broadcaster (of Las Vegas). URLs in the sequence:

http://63.123.224.168/mbop/display.php3?aid=18&uid=…
http://ad.adtegrity.net/imp?z=0&Z=0x0&s=113743&u=http%3A%2F%2F63.123.224.1…
http://ad.yieldmanager.com/imp?z=0&Z=0x0&s=113743&u=http%3A%2F%2F63.123.2….
http://ad.adtegrity.net/iframe3?AAAAAE-8AQBJ6wQARMcBAAIACAAAAP8AAAABEwACAA…
http://ad.yieldmanager.com/iframe3?AAAAAE-8AQBJ6wQARMcBAAIACAAAAP8AAAABEwA…
http://campaign.cpvfeed.com/cpvcampaign.jsp?p=110495&campaign=121kwunique&…
http://url.cpvfeed.com/cpv.jsp?p=110495&aid=501&partnerMin=0.0036&…
http://www.broadcaster.com/tms/video/index.php?show=trated&bcsrtkr=a85d2&u…

As in the preceding example, traffic originated with Yourenhancement’s display.php3 ad-loader, and lacked any branding to indicate its source. The preceding example reports some of the many contexts in which Yourenhancement has become installed on my test PCs without my consent.

The packet log indicates that Broadcaster purchased traffic from AdOn. But as the preceding example explains, Broadcaster should reasonably have known that buying traffic from AdOn means receiving forced-visit traffic as well as spyware-originating traffic.

Broadcaster has recently issued press releases to promote its increased traffic (“Broadcaster traffic rankings soar … one of the fastest growing online entertaining communities”; “88% increase in month-over-month website traffic”; “Tremendous audience growth”; etc.). So Broadcaster clearly views its traffic statistics as important. Yet nowhere in Broadcaster’s press releases does Broadcaster mention that its reported visitor counts include visitors who arrived involuntarily.

Broadcaster is a publicly traded company (OTC: BCSR.OB). Broadcaster’s December 2006 SEC 10KSB/A disclosure does briefly discuss Broadcaster’s purchase of “online advertisements … to attract new users” to its service. But the word “advertisements” tends to suggest mere solicitations (e.g. banner ads), not full impressions that cause a loading of Broadcaster’s site (and hence a tick in reported traffic figures). In my review of this and other Broadcaster financial documents, I could find no direct admission that Broadcaster buys cheap forced visits, then counts those involuntary visits towards records of site popularity. It appears that investors may be buying shares in Broadcaster without understanding the true origins of at least some of Broadcaster’s traffic.

This is not Broadcaster’s first run-in with spyware. Broadcaster’s Accessmedia subsidiary was named as a co-defendant in FTC and Washington Attorney General 2006 suits against Movieland et al., alleging that defendants’ software “barrages consumers’ computers with pop-up windows demanding payment to make the pop-ups go away.” According to the FTC’s complaint, Broadcaster’s Accessmedia subsidiary served as the registrant and technical contact for Movieland.com, and also shared telephone numbers and customer service with Movieland.

Example 4: Web Nexus Promoting Orbitz’s Away.com

Web Nexus Promoting Orbitz’s Away.com

In testing of April 29, I browsed the web and received the full-screen popup shown at right. As in Example 2, the popup even covered the Start Menu, Taskbar, and System Tray — preventing me from easily switching to another program. Meanwhile, the ad appeared substantially unlabeled — with a small Web Nexus caption at ad bottom, but with the caption’s letters more than half off-screen.

Packet log analysis reveals that traffic flowed as follows: From Web Nexus (purportedly of Bosnia and Herzegovina) directly to Orbitz’s Away.com. URLs in the sequence:

http://stech.web-nexus.net/cp.php?loc=295&cid=9951709&u=bmV0ZmxpeC5jb20v&e…
http://stech.web-nexus.net/sp.php/9905/28779/295/9951709/527/
http://travel.away.com/District-of-Columbia/travel-sc-hotels-1963-District…

The packet log indicates that Away.com received traffic directly from Web Nexus. Web Nexus is well-known to be unwanted advertising software: The first page of Google search results for “Web Nexus” includes five references to spyware, four to adware, one to viruses, and six to user complaints seeking assistance with removal. I have personally observed Web Nexus becoming installed through a WMF exploit and through the DollarRevenue bundler, among other methods.

Orbitz’s Away.com popup provides three distinct business benefits to Orbitz. First, the popup promotes Orbitz’s own services (e.g. its hotel booking services). Second, the popup promotes Orbitz’s advertisers (here, Verizon, despite Verizon’s repeatedly stated policy of not advertising through spyware). Finally, the popup inflates traffic statistics to Away.com — likely increasing advertisers’ future willingness to pay for ads at Away.com.

Example 5: WebBuying and Exit Exchange Promoting Roo TV

WebBuying Promoting Roo TV

In testing of April 23, I browsed the web and received the full-screen popup shown at right. As in Examples 2 and 4, the popup covered the Start Menu, Taskbar, and System Tray, and lacked readable labeling of its source.

Packet log analysis reveals that traffic flowed as follows: From WebBuying (a newer variant of Web Nexus) to ExitExchange to Roo TV. URLs in the sequence:

http://s.webbuying.net/e/sp.php/5ers7+aSiObv7uvm7e_v6e7o6e3m6erk
http://count.exitexchange.com/exit/1196612
http://ads.exitexchange.com/roo/?url=http://www.rootv.com/?channel=pop&f…
http://www.rootv.com/?channel=pop&fmute=true&bitrate=56

The packet log indicates that Roo TV received traffic directly from Exit Exchange — traffic that Roo TV reasonably should have known would include spyware-originating traffic. Exit Exchange widely receives spyware-originating traffic, which passes from a variety of spyware programs through Exit Exchange and onwards to Exit Exchange’s advertisers. (For example, in June 2006 I showed Exit Exchange receiving traffic from Surf Sidekick spyware, widely installed without consent. Meanwhile, SiteAdvisor rates Exit Exchange red for delivering exploits to users’ PCs — behavior I documented in February 2006 and observed twice last week alone.)

The Roo TV landing page URL leaves no doubt that Roo TV knew it was receiving forced visits. Notice the “channel=pop” tag in the URL log above — specifically conceding that the traffic at issue was not requested by users.

Roo TV’s “About” page reveals Roo’s emphasis on traffic quantity: The page’s first sentence boasts that “Roo is consistently ranked as one of the world’s ten most viewed online video networks.” But, as in the preceding examples, forced visits raise questions about how Roo got so popular. Is Roo a top-ten site in users’ minds, or only a destination users are frequently forced to visit, against their wishes?

Example 6: WebBuying Promoting Diet.com

WebBuying Promoting Diet.com

In testing of April 23, WebBuying also served a full-screen popup of Diet.com — again covering the Start Menu, Taskbar, and System Tray, and again lacking readable labeling to disclose its source. Screen-capture video.

Packet log analysis reveals that traffic flowed from WebBuying directly to Diet:

http://c.webbuying.net/e/check.php?cid=13352451&lid=327&cc=US&u=aHR0cDov…
http://s.webbuying.net/e/sp.php/6+rv6uaSiObv7uvm7e_v6e7o6e3m6erk
http://www.diet.com/tracking/index.php?id=1052

As in the Away.com example, Diet.com receives several benefits from this popup: Promoting its own content, showing ads for third parties (here, Nutrisystem), and inflating its traffic statistics.

Alexa’s traffic statistics show a 5x+ jump in Diet traffic in early March — the same period in which I began observing forced visits to Diet.com.

Additional Examples on File

The preceding six examples are only a portion of the spyware-originating forced visits I have recently observed and recorded. Under euphemisms that range from “audience development” to “push traffic,” these tactics have become widespread and, by all indications, continue to grow. I have seen other popups from each of these sites on numerous other occasions, and I have seen similar popups from other sites delivered via similar methods.

Implications & Policy Responses

Video sites are strikingly prevalent in the preceding examples and in other forced-visit traffic I have observed. Why? Google’s $1.65 billion acquisition of YouTube inspired others hoping to receive even a fraction of YouTube’s valuation. So far no competitor has gained much traction. But the expectation that video sites grow virally creates an incentive to try to jump-start traffic by any means possible — even spyware-originating traffic.

When forced-visit sites show ads, they tend to promote well-known advertisers. For example, two of the preceding examples (1, 4) feature Verizon, despite Verizon’s stated policy against spyware advertising. While concerned advertisers have generally added anti-spyware policies to their ad contracts, they still tend to ignore the problem of web sites buying spyware traffic. Verizon staff will probably take the position that it is not permissible for a Verizon ad to be shown in a site that receives widespread spyware traffic. But then Verizon’s ad contracts and other policy statements probably need to say so. Same for ad networks seeking to avoid reselling spyware inventory. In practice, few ad policies prohibit intermediary sites buying spyware-originating traffic.

Low-cost spyware-originating traffic can vastly increase a site’s reported popularity. Consider Alexa’s plot of Roo TV traffic. During April 2007 (when I first began to observe spyware-originating forced visits of Roo TV), Alexa reports that Roo’s reach and page views both jumped by an order of magnitude. It is difficult to know how much of this jump results from spyware-originating forced-visit traffic — rather than other kinds of forced visits, or conceivably bona fide user interest. But the New York Times piece reported that when ComScore last year adjusted Entrepreneur’s statistics to account for forced visits, traffic was reduced by 65%. A similar reduction may be required for the sites set out above.

When forced-visit sites show banner ads, the sites raise many of the same concerns as banner farms — including overwhelming advertising, unrequested popups, automatic reloads, opaque resale of spyware-originating traffic, and an overall bad value to advertisers. Particularly prominent among spyware-delivered banner farms is India Broadcast Live’s Smashits — which buys widespread spyware-originating forced-visit traffic, and shows as many as six different banner ads in a page that otherwise lacks substantial content. In some instances, Smashits’ page hijacks users’ browsers: Spyware removes the page a user had requested, and instead shows only the Smashits site. (Video example.) These practices may lead concerned advertisers and ad networks to avoid doing business with Smashits, including Smashits’ many alter egos and secondary domain names. But at present, Smashits continues to show ads from top advertisers and ad networks (particularly FastClick, Google, and TribalFusion). Same for other banner farms still in operation.

Detection

Sophisticated advertisers and ad networks rightly want to know which sites are buying spyware-originating forced-visit traffic. But they can’t answer that question merely by examining individual sites: Bolt, GrindTV, and kin all look like ordinary sites, without any obvious sign that they get traffic from spyware. So advertisers and ad networks can’t catch spyware-originating traffic using their usual techniques for evaluating publishers (such as browsing publishers’ sites in search of explicit or offensive materials).

Advertisers and ad networks might look for unusual changes in sites’ reported traffic rank — on the view that extreme spikes probably indicate forced-visit traffic. But there can be legitimate reasons for traffic spikes. Furthermore, an unexpected traffic jump will often prove an insufficient reason to block a prospective advertising relationship. Finally, if advertisers and ad networks distrusted sites with traffic spikes, sites could start their forced-visit campaigns more gradually, to avoid tell-tale jumps. So checking for traffic spikes is not a sustainable strategy.
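
That said, the spike check itself is easy to automate as a first-pass screen; the hard part is interpretation. A minimal sketch, assuming a weekly series of visit counts and an arbitrary threshold:

```python
def traffic_spikes(weekly_visits, threshold=3.0):
    """Flag week indices where traffic jumped by more than `threshold`x
    over the prior week. A flag is only a starting point: legitimate
    events (press coverage, viral content) also cause spikes."""
    flags = []
    for i in range(1, len(weekly_visits)):
        prev, cur = weekly_visits[i - 1], weekly_visits[i]
        if prev > 0 and cur / prev >= threshold:
            flags.append(i)
    return flags

# A roughly 5x jump in week 3, of the kind discussed above.
print(traffic_spikes([100_000, 110_000, 105_000, 550_000]))  # [3]
```

As the surrounding text notes, a screen like this is evadable by ramping traffic gradually, so it can only supplement, not replace, direct testing.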

With help from traffic measurement vendors, advertisers and ad networks could attempt to measure visit length rather than visit count. But even visit length measurement might not prevent miscounting of spyware-originating forced visits. Some spyware opens sites off-screen — where JavaScript or other code could extend traffic indefinitely to inflate measured visit length as needed, without users noticing and closing the resulting windows.

The only robust way to detect spyware-originating forced visits is through testing of actual spyware-infected PCs — by watching their behavior and seeing what sites they show. Historically, I’ve done this testing manually, as in the examples set out above. Fortunately, detecting widespread spyware-originating traffic is easy — because, by hypothesis, the traffic is common and hence likely to appear even in brief testing. That said, a scalable automated system might be preferable to my hands-on testing. I’ve recently built an automatic tester that performs this function, among others. I’ll describe it more in a coming piece. US patent pending.

Advertising Through Spyware — After Promising To Stop

On January 29, the New York Attorney General announced an important step in the fight against spyware: Holding advertisers accountable for their payments to spyware vendors. This is a principle I’ve long endorsed — beginning with my 2003 listing of Gator advertisers (then including Apple, Chrysler, and Orbitz), and continuing in my more recent articles about advertising intermediaries funding spyware and specific companies advertising through spyware.

I’m not the only one to applaud this approach. FTC Commissioner Leibowitz recently commended the NYAG’s settlement, explaining that “advertising dollars fuel the demand side of the nuisance adware problem by giving [adware vendors] the incentive to expand their installed base, with or without consumers’ consent.” In a pair of 2006 reports, the Center for Democracy and Technology also investigated spyware advertisers, attempting to expose the web of relationships that fund spyware vendors.

The NYAG’s settlement offers a major step forward in stopping spyware because it marks the first legally binding obligation that certain advertisers keep their ads (and their ad budgets) out of spyware. In Assurances of Discontinuance, Cingular (now part of AT&T), Priceline, and Travelocity each agreed to cease use of spyware. In particular, each company agreed either to stop using spyware advertising, or to use only “adware” that provides appropriate disclosures to users, prominently labels ads, and offers an easy procedure to uninstall. These requirements apply to ads purchased directly by Cingular, Priceline, and Travelocity, as well as to all marketing partners acting on their behalf.

These important promises are the first legally-binding obligations, from any Internet advertisers, to restrict use of spyware. (Compare, e.g., advertisers voluntarily announcing an intention to cease spyware advertising — admirable but not legally binding.) If followed, these promises would keep the Cingular, Priceline, and Travelocity ad budgets away from spyware vendors — reducing the economic incentive to make and distribute spyware.

But despite their duties to the NYAG, both Cingular and Travelocity have failed to sever their ties with spyware vendors. As shown in the six examples below, Cingular and Travelocity continue to receive spyware-originating traffic, including traffic from some of the web’s most notorious and most widespread spyware, in direct violation of their respective Assurances of Discontinuance. That said, Priceline seems to have succeeded in substantially reducing these relationships — suggesting that Cingular and Travelocity could do better if they put forth appropriate effort.

Example 1: Fullcontext, Yieldx (Admedian), Icon Media (Vizi) Injecting Travelocity Ad Into Google

A Travelocity Ad Injected into Google by Fullcontext

The Money Trail – How Travelocity Pays Fullcontext: money flows from Travelocity, to Icon (Vizi Media), to Yieldx (Ad|Median), to Fullcontext, while viewers flow in the opposite direction.

On a PC with Fullcontext spyware installed (controlling server 64.40.99.166), I requested www.google.com. In testing of February 13, I received the image shown in the thumbnail at right — with a large 728×90 pixel banner ad appearing above the Google site. Google does not sell this advertising placement to any advertiser for any price. But Fullcontext spyware placed Travelocity’s ad there nonetheless — without permission from Google, and without payment to Google.

As shown in the video I preserved, clicking the ad takes users through to the Travelocity site. The full list of URLs associated with this ad placement:

http://64.40.99.166/adrotate.php
http://ad.yieldx.com/imp?z=6&Z=728x90&s=41637&u=http%3A%2F%2Fwww.google.com…
http://ad.yieldmanager.com/imp?z=6&Z=728x90&s=41637&u=http%3A%2F%2Fwww.goog…
http://ad.yieldx.com/iframe3?jwIAAKWiAABdAwIA5soAAAAAxAEAAAAACwADBAAABgMKxQ…
http://ad.yieldmanager.com/iframe3?jwIAAKWiAABdAwIA5soAAAAAxAEAAAAACwADBAAA…
http://network.realmedia.com/RealMedia/ads/adstream_sx.ads/iconmedianetwork…
http://network.realmedia.com/RealMedia/ads/click_lx.ads/iconmedianetworks/e…
http://clk.atdmt.com/AST/go/247mancr0020000002ast/direct;at.astncr00000121;…
http://leisure.travelocity.com/RealDeals/Details/0,2941,TRAVELOCITY_CRU_354…

As shown in the URL log and packet log, Fullcontext initiated the ad placement by sending traffic to the Yieldx ad network. (Yieldx’s Whois reports an address in Hong Kong. But Yieldx is hosted at an IP block registered to Ad|Median, an ad network with headquarters near Minneapolis.) Using the Right Media Exchange marketplace (yieldmanager.com), Yieldx/Ad|Median then sold the traffic to Icon Media Networks (now Vizi Media of LA and New York), which placed the Travelocity ad. The diagram at right depicts the chain of relationships.

This placement is typical of the Fullcontext injector. I have tracked numerous Fullcontext placements, through multiple controlling servers. I retain many dozens of examples on file. See also prior examples posted to my public site: 1, 2, 3.

The Fullcontext injector falls far short of the requirements of Travelocity’s Assurance of Discontinuance. For one, users often receive Fullcontext without agreeing to install it — through exploits and in undisclosed bundles (violating Travelocity Assurance page 4, provision 11.a; PDF page 11). Furthermore, Fullcontext’s ads lack any branding indicating what adware program delivered them — violating Assurance provision 11.b, which requires such branding to appear prominently on each adware advertisement. Fullcontext’s uninstall and legacy user functions also fail to meet the requirements set out in the Assurance.

Example 2: Fullcontext and Motive Interactive Injecting Cingular Ad Into Google

A Cingular Ad Injected into Google by Fullcontext

The Money Trail – How Cingular Pays Fullcontext: money flows from Cingular, to Motive Interactive, to Fullcontext, while viewers flow in the opposite direction.

Through the Motive Interactive ad network, Fullcontext also injects a Cingular ad into Google. See screenshot at right, taken on February 17. On a PC with Fullcontext spyware installed (controlling server 64.40.99.166), I requested www.google.com. I received the image shown in the thumbnail at right — with a prominent Cingular banner ad appearing above Google. As in the case of Travelocity, this ad appeared without permission from Google and without payment to Google. Rather, the ad was placed into Google’s site by Fullcontext spyware.

The full list of URLs associated with this ad placement:

http://64.40.99.166/adrotate.php
http://ad.motiveinteractive.com/imp?z=6&Z=728×90&s=161838&u=http%3A%2F%2Fwww.goo…
http://ad.yieldmanager.com/imp?z=6&Z=728×90&s=161838&u=http%3A%2F%2Fwww.google.c…
http://ad.motiveinteractive.com/iframe3?jwIAAC54AgD5QwMAtVQBAAIAZAAAAP8AAAAHEQAA…
http://ad.yieldmanager.com/iframe3?jwIAAC54AgD5QwMAtVQBAAIAZAAAAP8AAAAHEQAABgTud…
http://clk.atdmt.com/goiframe/21400598/rghtccin0470000088cnt/direct;wi.728;hi.90…
http://www.cingular.com/cell-phone-service/cell-phone-details/?q_list=true&q_pho…

As shown in the URL log and packet log, Fullcontext sent traffic to Motive Interactive, a Nevada ad network. Using the Right Media Exchange marketplace (yieldmanager.com), Motive Interactive sold the traffic to Cingular. The diagram at right depicts the chain of relationships. Notice that the chain between Cingular and Fullcontext is one link shorter than the Travelocity chain in Example 1.
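The “u=” parameter visible in the imp requests above makes this reporting mechanical to check. A minimal Python sketch, using an abbreviated reconstruction of the logged request (the shortened parameters are illustrative, not copied from the original log):

```python
from urllib.parse import urlparse, parse_qs

def claimed_page(imp_url: str) -> str:
    """Extract the page an ad impression claims to come from.

    Right Media 'imp' requests carry a 'u=' query parameter naming
    the page on which the ad is purportedly shown.
    """
    params = parse_qs(urlparse(imp_url).query)
    if "u" not in params:
        return ""
    # parse_qs already percent-decodes the value.
    return urlparse(params["u"][0]).netloc

# Abbreviated reconstruction of the imp request logged above.
imp = ("http://ad.yieldmanager.com/imp?z=6&Z=728x90&s=161838"
       "&u=http%3A%2F%2Fwww.google.com%2F")
print(claimed_page(imp))   # -> www.google.com
```

A network or exchange running this check over its own logs would see immediately that the claimed placement page is Google’s homepage, which carries no third-party banners.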

Cingular should have known that this traffic was coming from spyware, because detailed information about the ad placement was sent to Cingular’s web servers whenever a user clicked a Fullcontext-placed ad. The packet log shows the information sent to the Atlas servers operating on Cingular’s behalf:

http://view.atdmt.com/CNT/iview/rghtccin0470000088cnt/direct;wi.728;hi.90/01?click=http:// ad.motiveinteractive.com/click,jwIAAC54AgD5QwMAtVQBAAIAZAAAAP8AAAAHEQAABgTudAIAmUcCAPqaAAC
iJAIAAAAAAAAAAAAAAAAAAAAAAKdz10UAAAAA,,http%3A%2F%2Fwww%2Egoogle%2Ecom%2F,

The first portion of the URL specifies what ad is to be shown, while the portion following the question mark reports how traffic purportedly reached this ad. (This information structure is standard for Right Media placements.) Notice the final URL-encoded parameter (http%3A%2F%2Fwww%2Egoogle%2Ecom%2F), telling Atlas (and in turn Cingular) that this ad was purportedly shown at www.google.com. But Atlas and Cingular should know that the www.google.com page does not sell banner ads to any advertiser at any price. The purported placement is therefore impossible — unless the ad was actually injected into Google’s site using spyware. The presence of this Google URL in Cingular’s referer log should have raised alarms at Cingular and should have prompted further investigation.
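Given this standard structure, a merchant (or its tracking provider) could flag impossible placements automatically. A minimal Python sketch, assuming the comma-separated Right Media click format described above; the shortened URL and the list of no-inventory sites are illustrative assumptions:

```python
from urllib.parse import unquote, urlparse

# Pages known to sell no banner inventory; an ad reported as shown
# there can only have been injected. (Illustrative assumption.)
NO_AD_INVENTORY = {"www.google.com"}

def purported_page(click_url: str) -> str:
    """Extract the purported placement page from a Right Media-style
    click URL: comma-separated fields follow the 'click=' parameter,
    one of which is the URL-encoded page address."""
    _, _, click_param = click_url.partition("click=")
    for field in click_param.split(","):
        if field.startswith("http%3A%2F%2F"):
            return urlparse(unquote(field)).netloc
    return ""

def looks_injected(click_url: str) -> bool:
    return purported_page(click_url) in NO_AD_INVENTORY

# Abbreviated reconstruction of the logged click URL.
url = ("http://view.atdmt.com/CNT/iview/rghtccin0470000088cnt/"
       "direct;wi.728;hi.90/01?click=http://ad.motiveinteractive.com/"
       "click,jwIAAC54AgD,,http%3A%2F%2Fwww%2Egoogle%2Ecom%2F,")
print(purported_page(url))    # -> www.google.com
print(looks_injected(url))    # -> True
```

The same check could run against any referer log: a hit on a no-inventory page is, by itself, strong evidence of an injection.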

Example 3: Deskwizz/Searchingbooth and Ad-Flow (Rydium) Injecting Travelocity Ad Into True.com

A Travelocity Ad Injected into True.com by Searchingbooth

[Diagram: money flows from Travelocity to Ad-Flow (Rydium) to Deskwizz/Searchingbooth; viewers flow in the opposite direction.]

The Money Trail – How Travelocity Pays Searchingbooth

Fullcontext is just one of several active ad injectors that place ads into other companies’ sites. The screenshot at right shows an injection performed by Deskwizz/Searchingbooth. In testing on March 9, I requested True.com. Deskwizz placed a large (720×300 pixel) banner into the top of the page (not shown), and another into the bottom. This latter banner, shown in the thumbnail at right, promoted Travelocity. Just as the preceding examples occurred without payment to or permission from Google, this placement occurred without payment to or permission from True.com. Rather, the ad was placed into True.com’s site by Deskwizz/Searchingbooth spyware.

The full list of URLs associated with this ad placement:

http://servedby.headlinesandnews.com/media/servlet/view/banner/unique/url/strip?…
http://www.uzoogle.com/indexP.php?PID=811
http://www.uzoogle.com   [posted parameter: PID=811]
http://ad.ad-flow.com/imp?z=2&Z=300×250&s=118935&u=http%3A%2F%2Fwww.uzoogle.com%…
http://ad.yieldmanager.com/imp?z=2&Z=300×250&s=118935&u=http%3A%2F%2Fwww.uzoogle…
http://ad.doubleclick.net/adj/N447.rightmedia.com/B2130591.2;sz=300×250;click0=h…

As shown in the URL log and packet log, Deskwizz/Searchingbooth sent traffic to its Uzoogle ad loader, which forwarded the traffic onwards to Ad-Flow. (Ad-Flow is the ad server of Rydium, a Toronto ad network.) The traffic then flowed through to the Right Media Exchange marketplace (yieldmanager.com), where it was sold to Travelocity. The diagram at right depicts the chain of relationships.

This placement is typical of Deskwizz/Searchingbooth. I have tracked a web of domain names operated by this group — including Calendaralerts, Droppedurl, Headlinesandnews, Z-Quest, and various others — that all receive traffic from and through similar banner injections. Z-quest.com describes itself as a “meta-search” site, while Uzoogle presents itself as offering Google-styled logos and branded search results. But in fact these sites all serve to route, frame, and redirect spyware-originating traffic, as shown above. I retain many dozens of examples on file. See also the multiple examples I have posted to my public site: 1, 2, 3, 4, 5.

Example 4: Deskwizz/Searchingbooth and Right Media Injecting Cingular Ad Into True.com

A Cingular Ad Injected into True.com by Searchingbooth

[Diagram: money flows from Cingular to Yield Manager / Right Media Exchange to Deskwizz/Searchingbooth; viewers flow in the opposite direction.]

The Money Trail – How Cingular Pays Searchingbooth

Deskwizz/Searchingbooth also injects Cingular ads into third parties’ sites, including into True.com. The screenshot at right shows the resulting on-screen display (as observed on March 9). The screenshot depicts a Cingular ad placed into True.com without True’s permission and without payment to True.

The full list of URLs associated with this ad placement:

http://servedby.headlinesandnews.com/media/servlet/view/banner/unique/url/strip?…
http://optimizedby.rmxads.com/st?ad_type=ad&ad_size=728×90&section=160636
http://ad.yieldmanager.com/imp?Z=728×90&s=160636&_salt=3434563176&u=http%3A%2F%2…
http://optimizedby.rmxads.com/iframe3?6B4AAHxzAgD5QwMAtVQBAAIAAAAAAP8AAAAGFAAABg…
http://ad.yieldmanager.com/iframe3?6B4AAHxzAgD5QwMAtVQBAAIAAAAAAP8AAAAGFAAABgJQF…
http://clk.atdmt.com/goiframe/22411278/rghtccin0470000088cnt/direct;wi.728;hi.90…

As shown in the URL log and packet log, Deskwizz/Searchingbooth sent traffic to Right Media’s Rmxads server. The traffic then flowed through to the Right Media Exchange marketplace (yieldmanager.com), where it was sold to Cingular. The diagram at right depicts the chain of relationships.

Cingular should have known that this ad was appearing through spyware injections for the same reason presented in Example 2. In particular, the packet log reveals that specific information about ad context was reported to Cingular’s server whenever a user clicked an injected ad. This context information put Cingular on notice as to where its ads were appearing — including sites on which Cingular had never sought to advertise, and even including sites that do not accept advertising.

Example 5: Web Nexus, Traffic Marketplace Promoting Travelocity in Full-Screen Pop-Up Ads

Web Nexus Promotes Travelocity Using a Full-Screen Pop-Up

[Diagram: money flows from Travelocity to Traffic Marketplace to Web Nexus; viewers flow in the opposite direction.]

The Money Trail – How Travelocity Pays Web Nexus

Although the four preceding examples all show banner ad injections, pop-up ads remain the most common form of spyware advertising. Spyware-delivered pop-ups continue to promote both Cingular and Travelocity. For example, Web Nexus is widely installed without consent (example) and in big bundles without the disclosures required by Travelocity’s Assurance of Discontinuance. Yet Web Nexus continues to promote Travelocity through intrusive full-screen pop-ups, like that shown at right (taken on February 22). Indeed, this pop-up is so large and so intrusive that it even covers the Start button — preventing users from easily switching to another program or window.

The Travelocity ad at issue is also striking for its lack of branding or other attribution. A user who manages to move the pop-up upwards will find a small “Web Nexus” footer at the ad’s bottom edge. But this label initially appears substantially off-screen and hence unreadable. In contrast, Travelocity’s Assurance of Discontinuance (Travelocity section, page 4, provision 11.b; PDF page 11) requires that each adware-delivered advertisement be branded with a “prominent” name or icon. Because it appears off-screen, Web Nexus’s ad label cannot satisfy the NYAG’s prominence requirement. Furthermore, packet log analysis reveals that this placement is the foreseeable result of Web Nexus’s design decisions. Further discussion and analysis.

The full list of URLs associated with this ad placement:

http://stech.web-nexus.net/cp.php?loc=295&cid=9951709&u=ZWJheS5jb20v&en=&pt=3…
http://stech.web-nexus.net/sp.php/9157/715/295/9951709/527/
http://t.trafficmp.com/b.t/e48U/1172127347
http://cache.trafficmp.com/tmpad/content/clickhere/travelocity/0107/contextu…

As shown in the URL log and packet log, Web Nexus sent traffic to Traffic Marketplace (a New York ad network owned by California’s Vendare Media). The traffic then flowed through to Travelocity. The diagram at right depicts the relationships.

Example 6: Targetsaver, EasilyFound, LinkShare Promoting Cingular in Full-Screen Pop-Up Ads

TargetSaver Promotes Cingular Using a Full-Screen Pop-Up

[Diagram: money flows from Cingular to LinkShare to EasilyFound to TargetSaver; viewers flow in the opposite direction.]

The Money Trail – How Cingular Pays TargetSaver

In testing of March 8, I searched for “get ringtones” at Google. I received the full-screen pop-up shown at right. This pop-up was served to me by TargetSaver spyware, widely installed without consent (example) and with misleading and/or hidden disclosures (1, 2). These installation practices cannot meet Cingular’s duties under its Assurance of Discontinuance (Cingular section, page 4, provision 14.a; PDF page 18).

The full list of URLs associated with this ad placement:

http://a.targetsaver.com/adshow
http://www.targetsaver.com/redirect.php?…www.easilyfound.com%2Fa%2F2.php…
http://www.easilyfound.com/a/2.php?cid=1032
http://www.easilyfound.com/a/3.php?cid=1032
http://click.linksynergy.com/fs-bin/click?id=MCVDOmK0318&offerid=91613.100…
http://www.cingular.com/cell-phone-service/cell-phone-sales/free-phones.js…

As shown in the URL log and packet log, TargetSaver sent traffic to EasilyFound. EasilyFound then forwarded the traffic on to LinkShare, a New York affiliate network, which sent the traffic to Cingular.
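Chains like this can be reconstructed mechanically from an ordered URL log. A Python sketch; the hostname-to-party mapping follows the attributions above but is itself an illustrative assumption, and the log entries are abbreviated:

```python
from urllib.parse import urlparse

# Map hostnames seen in the URL log to the parties that operate them.
# (Attributions follow the article's analysis; the mapping is an
# illustrative assumption.)
OPERATORS = {
    "a.targetsaver.com": "TargetSaver",
    "www.targetsaver.com": "TargetSaver",
    "www.easilyfound.com": "EasilyFound",
    "click.linksynergy.com": "LinkShare",
    "www.cingular.com": "Cingular",
}

def money_trail(url_log):
    """Collapse an ordered URL log into the chain of parties the
    traffic passed through, dropping consecutive duplicates."""
    chain = []
    for url in url_log:
        party = OPERATORS.get(urlparse(url).netloc, "?")
        if not chain or chain[-1] != party:
            chain.append(party)
    return chain

# Abbreviated versions of the logged URLs.
log = [
    "http://a.targetsaver.com/adshow",
    "http://www.targetsaver.com/redirect.php",
    "http://www.easilyfound.com/a/2.php?cid=1032",
    "http://www.easilyfound.com/a/3.php?cid=1032",
    "http://click.linksynergy.com/fs-bin/click?id=MCVDOmK0318",
    "http://www.cingular.com/cell-phone-service/",
]
print(" -> ".join(money_trail(log)))
# TargetSaver -> EasilyFound -> LinkShare -> Cingular
```

Reversing the printed chain gives the money trail: Cingular pays LinkShare, which pays EasilyFound, which pays TargetSaver.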

Cingular should have known that a partnership with EasilyFound would entail Cingular ads being shown through spyware. EasilyFound describes itself as “a metacrawler search engine.” But in my extended testing, EasilyFound widely buys spyware-originating traffic and sends that traffic onwards to affiliate merchants (Cingular among others). I have previously described this general practice in multiple articles on my public web site. I have also publicly documented this very behavior by EasilyFound specifically. In May 2006 slides, I showed EasilyFound buying traffic from Targetsaver and sending that traffic onwards to LinkShare and Walmart. I even posted an annotated packet log and traffic flow diagram. My slides have been available on the web for approximately ten months. Yet, by all indications, this affiliate remains in good standing at LinkShare and continues the same practices I documented last year.

According to Whois data, EasilyFound is based in Santa Monica, California, although EasilyFound’s Contact page gives no street address.

Additional Examples on File

The preceding six examples are only a portion of my recent records of spyware-originating ads from Cingular and Travelocity. I retain additional examples on file. My additional examples include additional banner injections, additional pop-ups, additional traffic flowing through Cingular’s affiliate program (LinkShare), and traffic flowing through Travelocity’s affiliate program (Commission Junction).

In my extended testing during the past two months, I have recorded only a single example of Priceline ads shown by spyware. That placement occurred through Priceline’s affiliate program, operated by Commission Junction.

The Scope of the Problem

The Assurances of Discontinuance reflect the remarkable size of the advertising expenditures that triggered the New York Attorney General’s intervention.

Cingular Wireless (AT&T)
  • Amount spent with Direct Revenue: At least $592,172
  • Duration of Direct Revenue relationship: April 1, 2004 through October 11, 2005
  • Number of ads shown: At least 27,623,257
  • Knowledge of Direct Revenue’s practices: “Even though Cingular was aware of controversy surrounding the use of adware and was aware, or should have been aware, of Direct Revenue’s deceptive practices, including surreptitious downloads, Cingular continued to use Direct Revenue.”
  • Payment to New York: $35,000 of investigatory costs and penalties

Priceline
  • Amount spent with Direct Revenue: At least $481,765.05
  • Duration of Direct Revenue relationship: May 1, 2004 through February 24, 2006
  • Number of ads shown: At least 6,142,395
  • Knowledge of Direct Revenue’s practices: “Priceline knew that consumers had downloaded Direct Revenue adware without full notice and consent and continued to receive ads through that software.”
  • Additional factors listed by NYAG: “Some of Priceline’s advertisements were delivered directly to consumers from web servers owned or controlled by Priceline.”
  • Payment to New York: $35,000 of investigatory costs and penalties

Travelocity
  • Amount spent with Direct Revenue: At least $767,955.93
  • Duration of Direct Revenue relationship: July 1, 2004 through April 15, 2006
  • Number of ads shown: At least 2,103,341
  • Knowledge of Direct Revenue’s practices: “Travelocity was aware that Direct Revenue had … been the subject of consumer complaints that Direct Revenue had surreptitiously installed its software on consumers’ computers without adequate notice.”
  • Payment to New York: $30,000 of investigatory costs and penalties

These three advertisers alone paid more than $1.8 million to Direct Revenue — approximately 2% of Direct Revenue’s 2004-2005 revenues. See detailed Direct Revenue financial records.

Bad Practices Continue at Zango, Notwithstanding Proposed FTC Settlement and Zango’s Claims (with Eric Howes; updated December 8, 2006)

Earlier this month, the FTC announced the proposed settlement of its investigation into Zango, makers of advertising software widely installed onto users’ computers without their consent or without their informed consent (among other bad practices).

We commend the proposed settlement’s core terms. But despite these strong provisions, bad practices continue at Zango — practices that, in our judgment, put Zango in violation of the key terms and requirements of the FTC settlement. We begin by explaining the proposed settlement’s requirements. We then present eight types of violations of the proposed settlement, with specific examples of each. We conclude with recommendations and additional analysis.

Except where otherwise indicated, this document describes only downloads we tested during November 2006 — current, recent installations and behaviors.

Zango’s Burdens Under the Proposed FTC Settlement

The FTC’s proposed settlement with Zango imposes a number of important requirements and burdens on Zango’s installation and advertising practices. Specifically, the settlement:

  • Prohibits Zango from using “any legacy program to display any advertisement to, or otherwise communicate with, a consumer’s computer.” (settlement I)
  • Prohibits Zango from (directly or via third parties) “exploit[ing] a security vulnerability … to download or install onto any computer any software code, program, or content.” (II)
  • Prohibits Zango from installing software onto users’ computers without “express consent.” Obtaining “express consent” requires “clearly and prominently disclos[ing] the material terms of such software program or application prior to the display of, and separate from, any final End User License Agreement.” (III) Defines “prominent” disclosure to be, among other requirements, “unavoidable.” (definition 5)
  • Requires Zango to “provide a reasonable and effective means for consumers to uninstall the software or application,” e.g. through a computer’s Add/Remove utility. (VII)
  • Requires Zango to “clearly and prominently” label each advertisement it displays. (VI)

These are serious burdens and requirements that, were they zealously satisfied by Zango, would do much to protect consumers from the numerous nonconsensual and misleading Zango installations we have observed.

Zango Is Not In Compliance with the Proposed Settlement

Zango has claimed that it “has met or exceeded the key notice and consent standards detailed in the FTC consent order since at least January 1, 2006.”

Despite Zango’s claim, we continue to find ongoing installations of Zango’s software that fall far short of the proposed settlement’s burdens, requirements, and standards. The example installations that we present below establish that Zango’s current installation and advertising practices remain in violation of the terms and requirements of the proposed settlement.

  • “Material Terms” Disclosed Only in EULA
    Zango often announces “material terms” only in its End User License Agreement, not in the more prominent locations required by the proposed settlement. (Examples A, B)
  • “Material Terms” Omitted from Disclosure
    Zango often omits “material terms” from its prominent installation disclosures — failing to prominently disclose facts likely to affect consumers’ decisions to install Zango’s software. (Examples A, B, C)
  • Disclosures Not Clear & Prominent 
    Zango presents disclosures in a manner and format such that these disclosures fail to gain the required “express consent” of users because the disclosures are not “clearly and prominently” displayed. (Examples B, E, F)
  • Disclosures Presented Only After Software Download & Execution
    Zango presents disclosures only after the installation and execution of Zango’s software on the users’ computers has already occurred, contrary to the terms of the proposed settlement. (Examples C, F)
  • No Disclosure Provided Whatsoever
    Some Zango software continues to become installed with no disclosure whatsoever. (Example D)
  • Installation & Servicing of Legacy Programs
    Older versions of Zango’s software — versions with installation, uninstallation, and/or disclosure inconsistent with the proposed settlement — continue to become installed and to communicate with Zango servers. (Examples C, D, E, F)
  • Installations Promoted & Performed through Miscellaneous Other Deceptive Means & Circumstances
    Zango installs are still known to be promoted and performed in or through a variety of miscellaneous practices that can only be characterized as deceptive. (Multiple examples in section G)
  • Unlabeled Advertising
    Some Zango advertisements lack the labeling required by the proposed settlement. (Multiple examples in section H)

These improper practices remain remarkably easy to find, and we have numerous additional recent examples on file. Moreover, these problems are sufficiently serious that they cast doubt on the efficacy and viability of the FTC’s proposed settlement as well as Zango’s ability to meet the requirements of the settlement.

Example A: Zango’s Ongoing Misleading Installations On and From Its Own Servers

The proposed settlement requires “express consent” before software may be “install[ed]” or “download[ed]” onto users’ PCs (III). Obtaining “express consent” requires “clear[] and prominent[]” disclosure of “the material terms” of the program to be installed, and most of Zango’s recent installation disclosures seem to meet this standard as to format. But we are concerned by what those disclosures say. In our view, the disclosures omit the material facts Zango is obliged to disclose.

Although the proposed settlement does not explain what constitutes a “material” term, other FTC authority provides a definition. The FTC’s Policy Statement on Deception holds that a material fact is one “likely to affect the consumer’s conduct or decision with regard to a product or service.”

From our analysis of Zango’s software, we think Zango has two material features — two features particularly likely to affect a reasonable user’s decision to install (or not install) Zango software. First, users must know that Zango will give them extra pop-up ads — not just “advertisements,” but pop-ups that appear in separate, freestanding windows. Second, users must know that Zango will transmit detailed information to its servers, including information about what web pages they view, and what they search for.

A Misleading Zango Installer Appearing Within Windows Media Player

Unfortunately, many of Zango’s installations fail to include these disclosures with the required prominence. Consider the screen shown at right. Here, Zango admits that it shows “advertisements,” but Zango fails to disclose that its ads appear in pop-ups. Zango’s use of the word “advertisements,” with nothing more, suggests that Zango’s ads appear in standard advertising formats — formats users are more inclined to tolerate, like ordinary banner ads within web pages (e.g. the ads at nytimes.com) or within other software programs (e.g. the ads in MSN Messenger). In fact Zango’s ads are quite different: they appear in freestanding pop-up windows, a format known to be particularly annoying and intrusive. But the word “advertisements” does nothing to alert users to this crucial fact.

Zango also fails to disclose that its servers receive detailed information about users’ online behavior. Zango tells users that ads are “based on” users’ browsing. But this disclosure is not enough, because it omits a material fact. In particular, the disclosure fails to explain that users’ behavior will be transmitted to Zango, a fact that would influence reasonable users’ decision to install Zango.

In addition, Zango’s description of its toolbar omits important, material effects of the toolbar — namely, that the toolbar will show distracting animated ads. Zango says only that the toolbar “lets [users] search the Internet from any webpage” — entirely failing to mention the toolbar’s advertising.

We’re also concerned about the format and circumstances of these installation screens. Zango’s installation request appears in a Windows Media “license acquisition” screen — a system Microsoft provides for bona fide license acquisition, not for the installation of spyware or adware. Zango’s installer appears within Windows Media Player — a context where few users will expect to be on the lookout for unwanted advertising software, particularly when users had merely sought to watch a video, not to install any software whatsoever. Furthermore, the button to proceed with installation is misleadingly labeled “Play Now” — not “I Accept,” “Install,” or any other caption that might alert users to the consequences of pressing the button.

The screen’s small size further adds to user confusion: At just 485 by 295 pixels, the window doesn’t have room to explain the material effects of Zango’s software, even with Zango’s extra-small font. (In Zango’s main disclosure, capital letters are just seven pixels tall.) Furthermore, a user seeking to read Zango’s EULA (as embedded in these installation screens) faces a remarkable challenge: The 3,033-word document is shown in a box just five lines tall, therefore requiring fully 53 on-screen pages to view in full. Finally, if a user ultimately presses the “Play Now” button, then the “Open” button on the standard Open/Save box that follows, Zango installs immediately, without any further opportunity for users to learn more or to change their mind. Such a rapid installation is contrary to the standard Windows convention of presenting further disclosures within an EXE installer, giving users further opportunities to learn more and to change their minds. Video capture of this installation sequence.

All in all, we think typical users would be confused by this screen — unable to figure out who it comes from, what it seeks to do, or what exactly will occur if they press the Play Now button. A more appropriate installation sequence would use a standard format users better understand (e.g. a web page requesting permission to install), would tell users far more about the software they’re receiving, and would label its buttons far more clearly.

These installations are under Zango’s direct control: They are loaded directly from Zango’s servers. Were Zango so inclined, it could immediately terminate this installation sequence, or it could rework these installations, without any cooperation from (or even requests to) its distributors.

Example B: Zango’s Ongoing Misleading Hotbar Installations On and From Its Own Servers

Hotbar’s Initial Installation Solicitation – Silent as to Hotbar’s Effects

Hotbar’s ActiveX Installer – Without Disclosure of Material Effects

Final Step in Hotbar Installation – No Cancel Button, No Disclosure of Material Effects

The “express consent” required under the proposed settlement applies not just to software branded as “Zango,” but also to all other software installed or downloaded by Zango. (See “any software” in section III.) The “express consent” requirement therefore applies to Hotbar-branded software owned by Zango as a result of Zango’s recent merger with Hotbar. But Hotbar installations fail to include unavoidable disclosures of material effects, despite the requirements in the proposed settlement.

Consider the Hotbar installation shown in this video and in the screenshots at right. The installation sequence begins with an ad offering “free new emotion icons” (first screenshot at right) — certainly no disclosure of the resulting advertising software, the kinds of ads to be shown, or the significant privacy effects. If a user clicks that ad, the user receives the second screenshot at right — a bare ActiveX screen, again lacking a substantive statement of material effects of installing. If the user presses Yes in the ActiveX screen, the user receives the third screen at right — disclosing some features of Hotbar (e.g. weather, wallpapers, screensavers), and vaguely admitting that Hotbar is “ad supported,” but saying nothing whatsoever about the specific types of ads (e.g. intrusive in-browser toolbar animations) nor the privacy consequences. Furthermore, this third screen lacks any button by which users can decline or cancel installation. (Note the absence of any “cancel” button, or even an “x” in the upper-right corner.)

This installation sequence is substantially unchanged from what Edelman reported in May 2005.

This installation lacks the unavoidable material disclosures required under the proposed settlement. We see no way to reconcile this installation sequence with the requirements of the proposed settlement.

Example C: Incomplete, Nonsensical, and Inconsistent Disclosures Shown by Aaascreensavers Installing Zango Software

Aaascreensavers’ Initial Zango Prompt – Omitting Key Material Information

Zango’s Subsequent Screen — with deficiencies set out in the text at left

We also remain concerned about third parties installing Zango’s software without the required user consent. Zango’s past features a remarkable series of bad-actor distributors, from exploit-based installers to botnets to faked consent. Even today, some distributors continue to install Zango without providing the required “clear and prominent” notice of “material” effects.

Consider an installation of Zango from Aaascreensavers.com. Aaascreensavers provides a generic “n-Case” installation disclosure that says nothing about the specifics of Zango’s practices — omitting even the word “advertisements,” not to mention “pop-ups” or privacy consequences. (See first screenshot at right.) Furthermore, Aaascreensavers fails to show or even reference a EULA for Zango’s software. Nonetheless, Aaascreensavers continues to place Zango software onto users’ PCs through these installers.

Particularly striking is the nonsensical screen that appears shortly after Aaascreensavers installs Zango. (See second screenshot at right.) Beneath a caption labeled “Setup,” the screen states “the content on this site is free, thanks to 180search Assistant” — although the user has just installed a program (and is not browsing a site), and the program the user (arguably) just agreed to install was called “n-Case” not “180search Assistant.” At least as paradoxically, the “Setup” screen asks users to choose between “Uninstall[ing] 180search Assistant” and “Keep[ing]” the software. Since “180search Assistant” is software reasonable users will not even know they have, this choice is particularly likely to puzzle typical users. After all, it is nonsense to speak of a user making an informed decision to “keep” software he didn’t know he had.

Crucially, both installation prompts omit the material information Zango must disclose under its settlement obligations: Neither prompt mentions that ads will be shown in pop-ups, nor do they mention the important privacy effects of installing Zango software.

Video capture of this installation sequence.

Example D: Msnemotions Installing Zango with No Disclosure At All

Msnemotions continues to install Zango software with no disclosure whatsoever. In particular, Msnemotions never shows any license agreement, nor does it mention or reference Zango in any other on-screen text, even if users fully scroll through all listings presented to them. Video proof.

This installation is a clear violation of section III of the proposed FTC settlement. That section prohibits Zango “directly, or through any person [from] install[ing] or download[ing] … any software program or application without express consent.” Here, no such consent was obtained, yet Zango software downloaded and installed anyway.

In our tests, this Zango installation did not show any ads (although it did contact a Zango server and download a 20MB file). Nonetheless, the violation of section III occurs as soon as the Zango software is downloaded onto the user’s computer, for lack of the requisite disclosure and consent.

Example E: Emomagic Installing Zango with an Off-Screen Disclosure

Emomagic First Mentions Zango Five Pages Down In Its EULA

Emomagic continues to install Zango software with a disclosure buried five pages within its lengthy (23 on-screen-page) license agreement. That is, unless a user happened to scroll to at least the fifth page of the Emomagic license, the user would not learn that installing Emomagic installs Zango too. Video proof.

This installation is a clear violation of the proposed FTC settlement, because the hidden disclosure of Zango software is not “unavoidable.” The proposed settlement’s provision III and definition 5 define “prominent” disclosures to be, among other requirements, unavoidable.

We have additional examples on file where the first mention of Zango comes as far as 64 pages into a EULA presented in a scroll box. See also example F, below, where Zango appears 44 pages into a EULA, after the GPL.
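The depth of a buried disclosure is straightforward to quantify. A Python sketch, with assumed scroll-box dimensions (the wrap width is an illustrative guess; the five-line box height matches Example A), applied to a toy EULA:

```python
# Estimate how many on-screen "pages" of a small EULA scroll box a
# user must page through before the first mention of a bundled
# program. Both dimensions below are illustrative assumptions.
CHARS_PER_LINE = 60   # assumed wrap width of the scroll box
LINES_PER_PAGE = 5    # box height observed in Example A

def page_of_first_mention(eula_text: str, needle: str) -> int:
    """Return the 1-based scroll-box page containing the first
    occurrence of needle, or -1 if it never appears."""
    offset = eula_text.find(needle)
    if offset < 0:
        return -1
    line = offset // CHARS_PER_LINE        # rough wrapped-line index
    return line // LINES_PER_PAGE + 1      # 1-based page number

# A toy EULA: 12 full pages of filler, then the bundled-software clause.
eula = ("x" * CHARS_PER_LINE * LINES_PER_PAGE * 12
        + "This product also installs Zango software.")
print(page_of_first_mention(eula, "Zango"))   # -> 13
```

Run against the real license texts, the same arithmetic yields the 5-, 44-, and 64-page depths reported in this section.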

Example F: Warez P2P Speedup Pro Installing Zango with an Off-Screen Disclosure

Warez P2P First Mentions Zango at Page 44 of its EULA, Below the GPL

Warez P2P Speedup Pro continues to install Zango software with a disclosure buried 44 pages within its lengthy license agreement. Video proof. Users are unlikely to see mention of Zango in part because Zango’s first mention comes so far down within the EULA.

Users are particularly unlikely to find Zango’s EULA because the first 43 pages of the EULA scroll box show the General Public License (GPL). (Screenshot of the first page, giving no suggestion that anything but the GPL appears within the scroll box.) Sophisticated users may already be familiar with this license, which is known for the many rights it grants to users and independent developers. Recognizing this pro-consumer license, even sophisticated users are discouraged from reviewing the scroll box’s contents in full — making it all the less likely that they will find the Zango license further down.

After installation, Warez P2P Speedup Pro proceeds to the second screen shown in Example C, above. The video confirms the special deceptiveness of this screen: If a user chooses the “uninstall” button — exercising his option (however deceptively mislabeled) to refuse Zango’s software — the user then receives a further screen attempting to get the user to change his mind and accept installation after all. The substance of this screen is especially deceptive — asking the user whether he wants to “cancel,” when in fact he had never elected even to start the Zango installation sequence in the first place. Finally, if the user presses the “Exit Setup” button on that final screen, the user is told he must restart his computer — a particularly galling and unnecessary interruption.

Section G: Zango Installations Predicated on Consumer Deception or on Use of Other Vendors’ Spyware

A Zango Ad Injected into Google by FullContext

We have also observed Zango installs occurring subsequent to consumer deception or other vendors sending spyware-delivered traffic to Zango.

Fullcontext spyware promoting Zango. We have observed Fullcontext spyware (itself widely installed without consent) injecting Zango ads into third parties’ web sites. Through this process, Zango ads appear without the permission of the sites in which they are shown, and without payment to those sites. These ads even appear in places where banner ads are not available for purchase at any price. See e.g. the screenshot at right, showing a Zango banner ad injected to appear above Google’s search results.

Typosquatters promoting Zango. Separately, Websense and Chris Boyd recently documented Zango installs commencing at “Yootube”. “Yootube” is a clear typosquat on the well-known “Youtube” site — hoping to reach users who mistype the address of the more popular site. If users reach the misspelled site, they are encouraged to install Zango. Such Zango installations are predicated on a typosquat — that is, on users reaching a site other than the one they intended — a particularly clear example of deception serving a key role in the Zango installation process.

Spyware bundlers promoting Zango. In our testing of summer and fall 2006, we repeatedly observed Zango “S3” installer programs downloaded onto users’ computers by spyware bundlers themselves operating without user consent (e.g. DollarRevenue and TopInstalls). Users received these Zango installation prompts amid an assault of literally dozens of other programs. Any consent obtained through this method is predicated on an improper, nonconsensual arrival onto users’ PCs — a circumstance in which we think users cannot grant informed consent. Furthermore, the proposed settlement requires “express consent” before “installing or downloading” (emphasis added) “any software” onto users’ PCs (section III). Zango’s S3 installer is a “software program” within the meaning of the proposed settlement, yet DollarRevenue and TopInstalls downloaded this program onto users’ computers without consent. So these downloads violate the plain language of the proposed settlement, even where users ultimately refuse to install Zango software.

Update (December 8): We have uncovered still other Zango installations predicated on deception, including on phishing at MySpace. We discuss these improper practices in our follow-up comment to the FTC. Our bottom line: These Zango installs are disturbing not because they put Zango in violation of the terms of the proposed settlement, but precisely because they do not — because these installations, disturbing though they may be, do not clearly violate any of the settlement’s requirements. These installations raise the alarming prospect that this settlement could allow Zango to continue to pay distributors to create malicious and/or deceptive software and web pages.

Section H: Unlabeled Ads

Today CDT filed a further comment about the FTC’s proposed settlement, focusing in part on Zango’s recent display of unlabeled ads, again specifically contrary to Zango’s obligations under the proposed settlement (section VI). CDT has proof of 39 unlabeled ads — 10% of their recent partially-automated tests — in which Zango’s pop-up ads lacked the labeling required under the proposed settlement. CDT explains that the ads “provide[d] absolutely no information that would allow consumers to correlate the advertisements’ origins to Zango’s software.”

We share CDT’s concern, because we too have repeatedly seen these problems. For example, this video shows a Zango ad served on November 19, 2006 — with labeling that disappears after less than four seconds on screen (from 0:02 to 0:06 in the video). Furthermore, Edelman first reported this same problem in July 2004: when ads include redirects (as many do), Zango’s labeling often disappears. Compliance with the proposed settlement requires that Zango’s labeling appear on each and every ad, not just on some of the ads or even on most of the ads. So, here too, Zango is in breach of the proposed settlement.

Furthermore, the proposed settlement’s labeling requirement applies to “any advertisement” Zango serves — not just to Zango’s pop-ups, but to other ads too. Zango’s toolbars show many ads, as depicted in the screenshots below. Yet these toolbars lack the labeling and hyperlinks required by the proposed settlement. These unlabeled toolbars therefore constitute an additional violation of Zango’s duties under the proposed settlement.


Zango and Zango/Hotbar Toolbars Without the Labeling Required under the Proposed Settlement

The Size of Zango’s Payment to the FTC

We are puzzled by the size of the cash payment to be made by Zango. We understand that the FTC’s authority is limited to reclaiming ill-gotten profits, not to extracting penalties. But we think Zango’s profits to date far exceed the $3 million payment specified in the proposed settlement.

Available evidence suggests Zango’s company-to-date profits are substantial, probably beyond $3 million. As a threshold matter, Zango’s business is large: Zango claims to have 20 million active users at present (albeit with some “churn” as users manage to uninstall Zango’s software). Furthermore, Zango’s revenues are large: Zango recently told a reporter of daily revenues of $100,000 (i.e. $36 million per year), a slight increase from a 2003 report of $75,000 per day. With annual revenues on the order of $20 to $40 million, and with three years of operation to date, we find it inconceivable that Zango has made only $3 million of profit.

Zango’s prior statements and other companies’ records also both indicate that Zango’s profits exceed $3 million. A 2005 Forbes article confirms high profits at Zango, reporting “double-digit percentage growth in profits” — though without stating the baseline level of profits. But financial records from competing “adware” vendor Direct Revenue indicate a remarkable 75%+ profit margin: In 2004, DR earned $30 million of pre-tax profit on $38 million of revenue. Because Zango’s business is in many respects similar to DR, Zango’s profit margin is also likely to be substantial, albeit reduced from the 2004-era “adware” peak. Even if Zango’s profit margin were an order of magnitude lower, i.e. 7%, Zango would still have earned far more than $3 million profits over the past several years.

If Zango’s profits substantially exceed $3 million, as we think they do, the settlement’s payment is only a slap on the wrist. A tougher fine — such as full disgorgement of all company-to-date profits worldwide — would better send the message that Zango’s practices are and have been unacceptable.

Zango’s Statements and the Need for Enforcement

In its November 3 press release, Zango claims its reforms are already in place. “Every consumer downloading Zango’s desktop advertising software sees a fully and conspicuously disclosed, plain-language notice and consent process,” Zango’s press release proclaims. This claim is exactly contrary to the numerous examples we present above. Zango further claims that it “has met or exceeded the key notice and consent standards detailed in the FTC consent order since at least January 1, 2006” — again contrary to our findings that nonconsensual and deceptive installations remain ongoing.

From the FTC’s press release and from recent statements of FTC commissioners and staff, it appears the FTC intends to send a tough message to makers of advertising software. We commend the FTC’s goal. The proposed settlement, if appropriately enforced, might send such a message. But we worry the FTC will send exactly the opposite message if it allows Zango to claim compliance without actually doing what the proposed settlement requires.

As a first step, we endorse CDT’s suggestion that the FTC require Zango to retract its claim of compliance with the proposed settlement. Zango’s statement is false, and the FTC should not stand by while Zango mischaracterizes its behavior vis-a-vis the proposed settlement.

More broadly, we believe intensive ongoing monitoring will be required to assure that Zango actually complies with the settlement. We have spent 3+ years following Zango’s repeated promises of “reform,” and we have first-hand experience with the wide variety of techniques Zango and its partners have used to place software onto users’ PCs. Testing these methods requires more than black-letter contracts and agreements; it requires hands-on testing of actual infected PCs and the scores of diverse infection mechanisms Zango’s partners devise. To assure that Zango actually complies with the agreement, we think the FTC will need to allocate its investigatory resources accordingly. We’ve spent approximately 10 hours on the investigations leading to the results above, and we’ve uncovered these examples as well as various others. With dozens or hundreds of hours, we think we could find many more surviving Zango installations in violation of the proposed settlement’s requirements. We think the FTC ought to find these installations, or require that Zango do so, and then ought to see that the associated files are entirely removed from the web.

Update (December 8): Our follow-up comment to the FTC discusses additional concerns, further ongoing bad practices at Zango, and the special difficulty of enforcement in light of practices seemingly not prohibited by the proposed settlement.

Intermix Revisited

I recently had the honor of serving as an expert witness in The People of the State of California ex rel. Rockard J. Delgadillo, Los Angeles City Attorney v. Intermix Media, Inc., Case No. BC343196 (L.A. Superior Court), litigation brought by the City Attorney of Los Angeles (on behalf of the people of California) against Intermix. Though Intermix is better known for creating MySpace, Intermix also made spyware that, among other effects, can become installed on users’ computers without their consent.

On Monday the parties announced a settlement under which Intermix will pay total monetary relief of $300,000 (including $125,000 of penalties, $50,000 in costs of investigation, and $125,000 in a contribution of computers to local non-profits). Intermix will also assure that third parties cease continued distribution of its software, among other injunctive relief. These penalties are in addition to Intermix’s 2005 $7.5 million settlement with the New York Attorney General.

In the course of this matter, I had occasion to examine my records of past Intermix installations. For example, within my records of installations I personally observed nearly two years ago, I found video evidence of Intermix becoming installed by SecondThought. By all indications, SecondThought’s exploit-based installers placed Intermix onto users’ computers without notice or consent.

Using web pages and installer files found on Archive.org, I also demonstrated that installations on Intermix’s own web sites were remarkably deficient. For example, some Intermix installations disclosed only a portion of the Intermix programs that would become installed, systematically failing to tell users about other programs they would receive if they went forward with installation. Most Intermix installations did not show users their license agreements automatically, instead requiring users to affirmatively click to access the licenses; and in some instances, even when a user did click, the license was presented without scroll bars, such that even a determined user couldn’t read the full license. Furthermore, some Intermix installations claimed a home page change would occur only if a user chose that option (“you can choose to have your default start page reset”), when in fact that change occurred no matter what, without giving users any choice.

Remarkably, I also found evidence of ongoing Intermix installations, despite Intermix’s 2005 promise to “permanently discontinue distribution of its adware, redirect and toolbar programs.” For example, in my testing of October 2006 and again just yesterday, the Battling Bones screensaver (among various others) was still available on Screensavershot.com (a third-party site). Installing Battling Bones gives users Intermix’s Incredifind too. Even worse, this installation proceeds without any disclosure to the user of the Intermix software that would be installed. (Video proof. The installer’s EULA mentions various other programs to be installed, but it never mentions Intermix or the specific Intermix programs that in fact were installed.) Furthermore, I found dozens of “.CAB” installation files still on Intermix’s own web servers — particularly hard to reconcile with Intermix’s claim of having abandoned this business nearly two years ago. Truly shutting down the business would have entailed deleting all such files from all servers controlled by Intermix.

I continue to think there’s substantial room for litigation against US-based spyware vendors. I continue to see nonconsensual and materially deceptive installations by numerous identifiable US spyware vendors. (For example, I posted a fresh Ask.com nonconsensual toolbar installation just last month. And I see more nonconsensual installations of other US-based vendors’ programs, day in and day out.) These vendors continue to cause substantial harm to the users who receive their unwanted software.


Technology news sites and forums have been abuzz over the FTC’s proposed settlement with Zango, whose advertising software has widely been installed without consent or without informed consent. I commend the FTC’s investigation, and the injunctive terms of the settlement (i.e. what Zango has to do) are appropriately tough. Oddly, Zango claims to have “met or exceeded the key notice and consent standards … since at least January 1, 2006.” I disagree. From what I’ve seen, Zango remains out of compliance to this day. I’m putting together appropriate screenshot and video proof.

Current Ask Toolbar Practices

Last year I documented Ask toolbars installing without consent as well as installing by targeting kids. Ask staff admitted both practices are unacceptable, and Ask promised to make them stop. Unfortunately, Ask has not succeeded.

In today’s post, I report notable current Ask practices. I show Ask ads running on kids sites and in various noxious spyware, specifically contrary to Ask’s prior promises. I document yet another installation of Ask’s toolbar that occurs without user notice or consent. I point out why Ask’s toolbar is inherently objectionable — especially its rearrangement of users’ browsers and its excessive pay-per-click ads to the effective exclusion of ordinary organic links. I compare Ask’s practices with its staff’s promises and with governing law — especially “deceptive door opener” FTC precedent, prohibiting misleading initial statements even where clarified by subsequent statements.

Details:

Current Practices of IAC/Ask Toolbars

False and Deceptive Pay-Per-Click Ads

I present and critique pay-per-click ads that don’t deliver what they promise. I consider implications for search engine revenues, and I analyze legal and ethical duties of advertisers and search engines. I offer a system for others to report similar ads that they find.

Read Google’s voluminous Adwords Content Policy, and you’d think Google is awfully tough on bad ads. If your company sells illegal drugs, makes fake documents, or helps customers cheat drug tests, you can’t advertise at Google. Google also prohibits ads for fireworks, gambling, miracle cures, prostitution, radar detectors, and weapons. What kind of scam could get through rules like these?

As it turns out, lots of pay-per-click advertisers push and exceed the limits of ethical and legal advertising — like selling products that are actually free, or promising their services are “completely free” when they actually carry substantial recurring charges.

In the sections that follow, I flag more than 30 different advertisers’ ads, all bearing claims that seem to violate applicable FTC rules (e.g. on use of the word “free”), or that make claims that are simply false. (All ads were observed on September 15 or later.) I then explain why this problem is substantially Google’s responsibility, and I present evidence suggesting Google’s substantial profits from these scams. Finally, I offer a mechanism for interested users to submit other false or deceptive ads, and I remark on Google’s failure to take action.

Charging for software that’s actually free

One scam Google doesn’t prohibit — and as best I can tell, does nothing to stop — is charging for software that’s actually free. Search for “Skype” and you’ll find half a dozen advertisers offering to sell eBay’s free telephone software. Search for “Kazaa” or “Grokster” and those products are sold too. Even Firefox has been targeted.

Each and every one of these ads includes the claim that the specified product is “free.” (These claims are expressed in ad titles, bodies, and/or display URLs). However, to the best of my knowledge, that claim is false, as applied to each and every ad shown above: The specified products are available from the specified sites only if the user pays a subscription fee.

These ads are particularly galling because, in each example, the specified program is available for free elsewhere on the web, e.g. directly from its developer’s web site. Since these products are free elsewhere, yet cost money at these sites (despite promises to the contrary), these sites offer users a particularly poor value.

Often these sites claim to offer tech support, but that’s also a ruse: Tests confirm there’s no real support.

Although sophisticated users will realize that these sites are bad deals, novice or hurried users may not. These sites bid for top search engine placement — often appearing above search engines’ organic (main) results. Some proportion of users see these prominent ads, click through, and get tricked into paying for these otherwise-free programs. Claiming a refund takes longer than it’s worth to most users. So as a practical matter, a site need only trick each user for an instant in order to receive its fee.

The “completely free” ringtones that aren’t

Ringtone ads often claim to be “free,” “totally free,” “all free,” “100% complimentary,” and available with “no credit card” and “no obligation” required. These claims typically appear in pay-per-click ad bodies, but they also often appear in ad titles, in ad domain names, and of course on landing pages.

Often, these claims are simply false: An ad does not offer a “totally free” product if it touts a limited free trial followed by an auto-renewing paid service (a negative option plan).

Other claims are materially misleading. For example, claiming “no credit card required” suggests that no charges will accrue. But that too is false, since ringtone sites generally charge users through cell phone billing systems — unbeknownst to many users, who believe a service has no way to impose a charge if a user provides no credit card number.

Each and every one of these ads includes the claim that the specified product is “free” (or some other substantially similar claim, e.g. “complimentary”). In most cases, subsequent language attempts to disavow these “free” claims. But in each case, to the best of my knowledge, service is available only if a user enters into a paid relationship (e.g. a paid subscription) — the very opposite of “free.” (Indeed, the subscription requirement applies even to unlimitedringtones.com, despite that ad’s claim that “no subscription [is] required.” The site’s fine print later asserts that by requesting a ringtone registration, a user “acknowledge[s] that [he is] subscribing to our service billed at $9.99 per month” — specifically contrary to the site’s earlier “no subscription” promise.)

Vendors would likely defend their sites by claiming that (in general) their introductory offers are free, and by arguing that their fine print adequately discloses users’ subsequent obligations. This is interesting reasoning, but it’s ultimately unconvincing, thanks to clear regulatory duties to the contrary.

The FTC’s Guide Concerning the Use of the Word ‘Free’ is exactly on point. The guide instructs advertisers to use the word “free” (and all words similar in meaning) with “extreme care” “to avoid any possibility that consumers will be misled or deceived.” The guide sets out specific rules as to how and when the word “free” may be used, and it culminates with an incredible provision prohibiting fine print from disclaiming what “free” promises. In particular, the rule’s section (c) instructs (emphasis added):

All the terms, conditions and obligations upon which receipt and retention of the ‘Free’ item are contingent should be set forth clearly and conspicuously at the outset of the offer … in close conjunction with the offer of ‘Free’ merchandise or service.

In case that instruction left any doubt, the FTC’s rule continues:

For example, disclosure of the terms of the offer set forth in a footnote of an advertisement to which reference is made by an asterisk or other symbol placed next to the offer, is not regarded as making disclosure at the outset.

Advertisers may not like this rule, but it’s remarkably clear. Under the FTC’s policy, ads simply cannot use a footnote or disclaimer to escape a “free” promise made earlier. Nor can an advertiser promise a “free” offer at an early stage (e.g. a search engine ad), only to impose additional conditions later (such as in a landing page, confirmation page, or other addendum). The initial confusion or deception is too strong to be cured by the subsequent revision.

Advertisers might claim that the prohibited “free” ads at issue come from their affiliates or other partners — that they’re not the advertisers’ fault. But the FTC’s Guide specifically speaks to the special duty of supervising business partners’ promotion of “free” offers. In particular, section (d) requires:

[I]f the supplier knows, or should know, that a ‘Free’ offer he is promoting is not being passed on by a reseller, or otherwise is being used by a reseller as an instrumentality for deception, it is improper for the supplier to continue to offer the product as promoted to such reseller. He should take appropriate steps to bring an end to the deception, including the withdrawal of the ‘Free’ offer.

It therefore appears that the ads shown above systematically violate the FTC’s “free” rules. Such ads fail to disclose the applicable conditions at the outset of the offer, as FTC rules require. And even where intermediaries have placed such ads, their involvement offers advertisers no valid defense.
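A first-pass automated screen for such ads is straightforward to sketch. The example below is hypothetical — the keyword patterns are my illustration, not the FTC’s definitions or any ad network’s actual filter — but it captures the basic pattern criticized above: a “free” claim paired with paid-relationship fine print.

```python
import re

# Illustrative (not exhaustive) "free"-type claims per the FTC's guide
FREE_CLAIM = re.compile(
    r"\b(free|complimentary|no\s+(?:charge|cost|obligation|credit\s+card))\b",
    re.IGNORECASE)

# Illustrative signals of a paid relationship contradicting a "free" claim
PAID_SIGNAL = re.compile(
    r"(\$\s*\d|per\s+month|/mo\b|subscription|billed|auto-?renew|membership\s+fee)",
    re.IGNORECASE)

def flag_free_claim(ad_text, landing_text=""):
    """True if the ad claims "free" while the ad copy or its landing
    page reveals a paid relationship -- the pattern criticized above."""
    combined = ad_text + " " + landing_text
    return bool(FREE_CLAIM.search(ad_text)) and bool(PAID_SIGNAL.search(combined))
```

Such a screen would flag, for instance, ad copy reading “Totally Free Ringtones – No Credit Card” whose landing page discloses “billed at $9.99 per month.”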

Ads impersonating famous and well-known sites

Some pay-per-click ads affirmatively mislead users about who is advertising and what products are available. Consider the ads below, for sites claiming to be (or to offer) Spybot. (Note the text in their respective display URLs, shown in green type.) Despite the “Spybot” promise, these sites actually primarily offer other software, not Spybot. (Spybot-home.com includes one small link to Spybot, at the far bottom of its landing page. I could not find any link to the true Spybot site from within www-spybot.net.)

In addition, search engine ads often include listings for sites with names confusingly similar to the sites and products users request. For example, a user searching for “Spybot” often receives ads for SpyWareBot and SpyBoot — entirely different companies with entirely different products. US courts tend to hold that competitive trademark targeting — one company bidding on another company’s marks — is legal, in general. (French courts tend to disagree.) But to date, these cases have never considered the heightened confusion likely when a site goes beyond trademark-targeting and also copies or imitates another company’s name. Representative examples follow. Notice that each ad purports to offer (and is triggered by searches for the name of) a well-known product — but in fact these ads take users to competing vendors.
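One simple way to surface such look-alike names is an edit-distance screen. The sketch below is purely illustrative — the normalization and the distance threshold are my assumptions, and real confusion analysis turns on much more than spelling:

```python
import re

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def looks_confusingly_similar(mark, advertiser, max_dist=2):
    """Flag an advertiser name within a small edit distance of a
    well-known mark, after lowercasing and stripping punctuation.
    The threshold of 2 is an arbitrary illustrative choice."""
    norm = lambda s: re.sub(r"[^a-z0-9]", "", s.lower())
    a, b = norm(mark), norm(advertiser)
    return a != b and levenshtein(a, b) <= max_dist
```

Under this screen, “SpyBoot” sits within edit distance 1 of “Spybot,” and the typosquat “Yootube” within distance 1 of “Youtube,” while unrelated competing names are not flagged.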

Google’s responsibility – law, ethics, and incentives

Google would likely blame its advertisers for these dubious ads. But Google’s other advertising policies demonstrate that Google has both the right and the ability to limit the ads shown on its site. Google certainly profits from the ads it is paid to show. Profits plus the right and ability to control yield exactly the requirements for vicarious liability in other areas of the law (e.g. copyright infringement). The FTC’s special “free” rules indicate little tolerance for finger-pointing — even specifically adding liability when “resellers” advertise a product improperly. These general rules provide an initial basis to seek greater efforts from Google.

Crucially, the Lanham Act specifically contemplates injunctive relief against a publisher for distributing false advertising. 15 USC § 1125(a)(1) prohibits false or misleading descriptions of material product characteristics. § 1114(2) limits a plaintiff to injunctive relief (no money damages) against a publisher that establishes it is an “innocent infringer.” If facing claims on such a theory, Google would surely attempt to invoke the “innocent infringer” doctrine — but that attempt might well fail, given the scope of the problem, given Google’s failure to stop even flagrant and longstanding violations, and given Google’s failure even to block improper ads specifically brought to its attention. (See e.g. World Wrestling Federation v. Posters, Inc., 2000 WL 1409831, holding that a publisher is not an innocent infringer if it “recklessly disregard[s] a high probability” of infringing others’ marks.)

Nonetheless, the Communications Decency Act’s 47 USC § 230(c)(1) potentially offers Google a remarkable protection: CDA § 230 instructs that Google, as a provider of an interactive computer service, may not be treated as the publisher of content others provide through that service. Even if a printed publication would face liability for printing the same ads Google shows, CDA § 230 may let Google distribute such ads online with impunity. From my perspective, that would be an improper result — bad policy in CDA § 230’s overbroad grant of immunity. A 2000 DOJ study seems to share my view, specifically concluding that “substantive regulation … should, as a rule, apply in the same way to conduct in the cyberworld as it does to conduct in the physical world.” But in CDA § 230, Congress seems to have chosen a different approach.

That said, CDA § 230’s reach is limited by its exception for intellectual property laws. § 230(e)(2) provides that intellectual property laws are not affected by § 230(c)(1)’s protection. False advertising prohibitions are codified within the Lanham Act (an intellectual property statute), offering a potential argument that CDA § 230 does not block false advertising claims. This argument is worth pursuing, and it might well prevail. But § 230 cases indicate repeated successes for defendants attempting to escape liability on a variety of fact patterns and legal theories. On balance, I cannot confidently predict the result of litigation attempting to hold Google responsible for the ads it shows. As a practical matter, it’s unclear whether or when this question will be answered in court. Certainly no one has attempted such a suit to date.

Notwithstanding Google’s possible legal defenses, I think Google ought to do more to make ads safe as a matter of ethics. Google created this mess — by making it so easy for all companies, even scammers, to buy Internet advertising. So Google faces a special duty to help clean up the resulting problems. Google already takes steps to avoid sending users to web sites with security exploits, and Google already refuses ads in various substantive categories deemed off-limits. These scams are equally noxious — directly taking users’ money under false pretenses. And Google’s relationship with these sites is particularly unsavory since Google directly and substantially profits from their practices, as detailed in the next section.

Even self-interest ought to push Google to do more here. Google may make an easy profit now by selling ads to scammers. But in the long run, rip-off ads discourage users from clicking on Google’s sponsored links — potentially undermining Google’s primary revenue source.

Who really profits from rip-off ads?

When users suffer from scams like those described above, users’ money goes to scammers, in the first instance. But each scammer must pay Google whenever a user clicks its ad. So Google profits from scammers’ activities. If the scammers ceased operations — voluntarily, or because Google cut off their traffic — Google’s short-run revenues would decrease.

Users → (service fees) → Scammers → (advertising fees) → Google

How Google Profits from Scammers
Consider the business model of rogue web sites “selling” software like Skype. They have one source of revenue — users buying these programs. Their expenses tend to be low: they provide no substantial customer service, and often they link to downloads hosted elsewhere to avoid even incurring bandwidth costs. It seems the main expense of such sites is advertising — with pay-per-click ads from Google by all indications a primary component. The diagram at right shows the basic money trail: From users to scam advertisers to Google. When users are ripped off by scammers, at least some of the payment flows through to Google.

How much of users’ payments goes to Google, rather than being retained by scammers? My academic economics research offers some insight. Recall that search engine ads are sold through a complicated multi-unit second-price auction: each advertiser’s payment is determined by the bid of the advertiser below him. Many equilibria are possible, but my recent paper with Michael Ostrovsky and Michael Schwarz offers one outcome we think is reasonable — an explicit formula for each advertiser’s equilibrium bid as a function of its value (per click) and of others’ bids. In subsequent simulations (article forthcoming), Schwarz and I will demonstrate the useful properties of this bidding rule — that it dominates most other strategies under very general conditions. So there’s good reason to think markets might actually end up in this equilibrium, or one close to it. If so, we need only know advertisers’ valuations (which we can simulate from an appropriate distribution) to compute market outcomes (like advertiser profits and search engine revenues).

One clear result of my recent bidding simulations: When advertisers have similar valuations (as these advertisers do), they tend to “bid away” their surpluses. That is, they bid almost as much as a click is worth to them — so they earn low profits, while search engines reap high revenues. When a user pays such an advertiser, it wouldn’t be surprising if the majority of that advertiser’s gross profit flowed through to Google.

A specific example helps clarify my result. Consider a user who pays $38 to Freedownloadhq.com for a “free” copy of Skype. But Freedownloadhq also received, say, 37 other clicks from 37 other users who left the site without making a purchase. Freedownloadhq therefore computes its valuation per click (its expected gross profit per incoming visitor) to be $1. The other advertisers for “Skype” use a similar business model, yielding similar valuations. They bid against each other, rationally comparing the benefits of high traffic volume (if they bid high to get top placement at Google) against the resulting higher costs (hence lower profits). In equilibrium, my simulations report, with 10 bidders and 20% standard deviation in valuations (relative to valuation levels), Google will get 71% of advertisers’ expected gross profit. So of the user’s $38, fully $27 flows to Google. Even if Freedownloadhq’s business includes some marginal costs (e.g. credit card processing fees), Google will still get the same proportion of gross profit.
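The arithmetic in this example can be checked directly; the 71% figure is taken as given from the simulations described above:

```python
sale = 38.0          # one user's payment to Freedownloadhq
clicks = 38          # the buyer plus 37 visitors who bought nothing
value_per_click = sale / clicks   # the $1 valuation in the text
google_share = 0.71  # equilibrium share reported by the simulations
to_google = google_share * sale   # about $27 of the $38 payment
```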

One need not believe my simulation results, and all the economic reasoning behind them, in order to credit the underlying result: That when an auctioneer sells to bidders with similar valuations, the bidders tend to bid close together — giving the auctioneer high revenues, but leaving bidders with low profits. And the implications are striking: For every user who pays Freedownloadhq, much of the user’s money actually goes to Google.

In January I estimated that Google and Yahoo make $2 million per year on ads for “screensavers” that ultimately give users spyware. Add in all the other terms with dubious ads — all the ringtone ads, the “free” software downloads that aren’t, ads making false statements of product origin, and various other scams — and I wouldn’t be surprised if the payments at issue total one to two orders of magnitude higher.

Towards a solution

Some of these practices have been improving. For example, six months ago almost all “ringtones” ads claimed to be “free,” but today some ringtone ads omit such claims (even while other ads still include these false statements).

Recent changes in Google pricing rules seem to discourage some of the advertisers who place ads of the sort set out above. Google has increased its pricing to certain advertisers, based on Google’s assessment of their “low quality user experience.” But the specific details of Google’s rules remain unknown. And plenty of scam ads — including all those set out above — have remained on Google’s site well after the most recent round of rule changes. (All ads shown above were received on September 15, 2006, or later.)

Google already has systems in place to enforce its Adwords Content Policy. My core suggestion for Google: Expand that policy to prevent these scams — for example, explicitly prohibiting ads that claim a product is “free” when it isn’t, and explicitly prohibiting charging users for software that’s actually free. Then monitor ads for words like “free” and “complimentary” that are particularly likely to be associated with violations. When a bad ad is found, disable it, and investigate other ads from that advertiser.
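To illustrate the monitoring step, here is a minimal sketch of the kind of keyword screen I have in mind (the word list and function are hypothetical; a real system would need far more nuance — for example, to permit truthful uses of “free”):

```python
import re

# Hypothetical watch list of words often tied to "free"-claim violations.
FLAG_WORDS = ["free", "complimentary", "no cost"]

def flag_ad(ad_text):
    """Return the watch-list words appearing in an ad, for human review."""
    text = ad_text.lower()
    return [w for w in FLAG_WORDS
            if re.search(r"\b" + re.escape(w) + r"\b", text)]
```

Ads flagged this way would go to human review, not automatic removal — the point is to concentrate reviewers’ attention where violations are most likely.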

To track and present more dubious ads, I have developed a system whereby interested users can submit ads they consider misleading for the general reasons set out above. Submit an ad or view others’ submissions.

These problems generally affect other search engines too — Yahoo, MSN, and Ask.com, among others. But because Google is the largest search engine and a self-proclaimed leader on ethics issues, I look to Google first and foremost for leadership and improvement.

Google’s (Non-)Response

When Information Week requested a comment from Google as to the ads I reported, Google responded as follows:

When we become aware of deceptive ads, we take them down. … We will review the ads referenced in this report, and remove them if they do not adhere to our guidelines.

A week later, these ads remain available. So Google must have concluded that these ads are not deceptive (or else Google would have “take[n] them down” as its first sentence promised). And Google must have concluded that these ads do adhere to applicable Google policies, or else Google would have “remove[d] them” (per its second sentence).

Google’s inaction exactly confirms my allegation: That Google’s ad policies are inadequate to protect users from outright scams, even when these scams are specifically brought to Google’s attention.

All identifications and characterizations have been made to the best of my ability. Any errors or alleged errors may be brought to my attention by email.

I thank Rebecca Tushnet for helpful discussions on the legal duties of advertisers and search engines.

Originally posted October 9, 2006. Last Updated: October 16, 2006.

PPC Ads, Misleading and Worse

Read Google’s voluminous Adwords Content Policy, and you’d think Google is awfully tough on bad ads. If your company sells illegal drugs, makes fake documents, or helps customers cheat drug tests, you can’t advertise at Google. Google also prohibits ads for fireworks, gambling, miracle cures, prostitution, radar detectors, and weapons. What kind of scam could get through rules like these?

As it turns out, lots of pay-per-click advertisers push and exceed the limits of ethical and legal advertising — like selling products that are actually free, or promising their services are “completely free” when they actually carry substantial recurring charges. For example, the ad at right claims to offer “100% complimentary” and “free” ringtones, when actually the site promotes a service that costs approximately $120 per year.

 


An example misleading ad, falsely claiming ringtones are “complimentary” when they actually carry a monthly fee.

In today’s article, I show more than 30 different advertisers’ ads, all bearing claims that seem to violate applicable FTC rules (e.g. on use of the word “free”), or that make claims that are simply false. I then analyze the legal and ethical principles that might require search engines to remove these ads. Finally, I offer a mechanism for interested users to submit other false or deceptive ads they find.

Details:

False and Deceptive Pay-Per-Click Ads

Certifications and Site Trustworthiness

When a stranger promises “you can trust me,” most people know to be extra vigilant. What conclusion should users draw when a web site touts a seal proclaiming its trustworthiness? Some sites that are widely regarded as extremely trustworthy present such seals. But those same seals feature prominently on sites that seek to scam users — whether through spyware infections, spam, or other unsavory practices.

It’s no great surprise that bad actors seek to free-ride on sites users rightly trust. Suppose users have seen a seal on dozens of sites that turn out to be legitimate. Dubious sites can present that same seal to encourage more users to buy, register, or download.

But certification issuers don’t have to let this happen. They could develop and enforce tough rules, so that every site showing a seal is a site users aren’t likely to regret visiting. Unfortunately, certification authorities don’t always live up to this ideal. Writing tough rules isn’t easy, and enforcing them is even harder. Hard-hitting rules are particularly unlikely when certification authorities get paid for each certification they issue — but get nothing for rejecting an applicant.

Today I’m posting Adverse Selection in Online “Trust” Authorities, an empirical look at the best-known certification authority, TRUSTe. I cross-reference TRUSTe’s ratings with the findings of SiteAdvisor — where robots check web site downloads for spyware, and submit single-use addresses into email forms to check for spam, among other automated and manual tests. Of course SiteAdvisor data isn’t perfect either, but if SiteAdvisor says a site is bad news, while TRUSTe gives it a seal, most users are likely to side with SiteAdvisor. (Full disclosure: I’m on SiteAdvisor’s advisory board. But SiteAdvisor’s methodology speaks for itself.)

(update, July 2009: I have posted a revised version of Adverse Selection in Online “Trust” Authorities, as published in the Proceedings of ICEC’09)

What do I find? In short, nothing good. I examine a sampling of 500,000+ top web sites, as reported by a major ISP. Of the sites certified by TRUSTe, 5.4% are untrustworthy according to SiteAdvisor’s data, compared with just 2.5% untrustworthy sites in the rest of the ISP’s list. So TRUSTe-certified sites are more than twice as likely to be untrustworthy. This result also holds in a regression framework controlling for site popularity (traffic rank) and even a basic notion of site type.

Particularly persuasive are some specific sites TRUSTe has certified as trustworthy, although in my experience typical users would disagree. I specifically call out four sites certified by TRUSTe as of January 2006:

  • Direct-revenue.com – Makes advertising software known to become installed without consent. Tracks what web sites users visit, and shows pop-up ads. Historically, blocks many attempts at removal, automatically reinstalls itself, and deletes certain other programs from users’ PCs. Faces litigation by the New York Attorney General plus consumer class actions.
  • Funwebproducts.com – This site, among other Ask.com toolbar distribution points, installs a toolbar into users’ web browsers when users install smileys, screensavers, cursors, or other trinkets. Moves a user’s Address Bar to the right side of the browser, such that typing an address into the standard top-left box performs a search rather than a direct navigation. Promotes its toolbar in ads shown by other vendors’ spyware.
  • Maxmoolah.com – Offers users “free” gifts if they complete numerous sequential partner offers. Privacy policy allows sharing of users’ email addresses and other information with third parties. In testing, providing an email address to Maxmoolah.com yielded a total of 485 distinct e-mails per week, from a wide variety of senders.
  • Webhancer.com – Makes online tracking software, which I have personally observed is often installed without consent. Monitors what web sites users visit, and sends this information to Webhancer’s servers.

This is an academic article — ultimately likely to be a portion of my Ph.D. dissertation. So it’s mathematical in places where that’s likely to be helpful (to some readers, at least), and it’s not as accessible as most of my work. But for those who are concerned about online safety, it may be worth a read. Feedback welcomed.


In its response to my article, TRUSTe points out that Direct Revenue and Maxmoolah no longer hold TRUSTe certifications. True. But Maxmoolah was certified for 13+ months (from February 2005 through at least March 2006), and Direct Revenue was certified for at least 8 months (from April 2005 or earlier, through at least January 2006). These companies’ practices were bad all along. TRUSTe need not have certified them in the first place.

TRUSTe then claims that its own web site made an “error” in listing FunWebProducts as a member. TRUSTe does not elaborate as to how it made so fundamental a mistake — reporting that a site has been certified when it has not. TRUSTe’s FunWebProducts error was compounded by the apparent additional inclusion of numerous other near-identical Ask.com properties (Cursormania, Funbuddyicons, Historyswatter, Mymailstationery, Smileycentral, Popularscreensavers). TRUSTe’s error is particularly troubling because at least some of the erroneously-listed sites were listed as certified for 17 months or longer (from May 2005 or earlier, through at least September 12, when Google last crawled TRUSTe’s member list).

As to Webhancer, TRUSTe claims further tests (part of TRUSTe’s Trusted Download program) will confirm the company’s practices. But that’s little benefit to consumers who currently see Webhancer’s seal and mistakenly conclude TRUSTe has already conducted an appropriate review of Webhancer’s products, when in fact it has not. Meanwhile, I have personally observed Webhancer’s bad installation practices day in and day out — including widespread nonconsensual installations by the notorious Dollar Revenue, among others. These observations are trivial to reproduce, yet Webhancer remains a TRUSTe certificate holder to this day.

Consumers deserve certifications that are correctly issued in the first place — not merely revoked after months or years of notorious misbehavior, and not mistakenly listed as having been issued when in fact they were not. TRUSTe is wrong to focus on the few specific examples I chose to highlight. The problem with TRUSTe’s approach is more systemic, as indicated by the many other dubious TRUSTe-certified sites analyzed in my dataset but not called out by name in my paper or appendix.

Consider some of the other unsavory sites TRUSTe has certified:

  • TRUSTe certifies numerous sites that most users would call spammers — like focalex.com (which sends users 320+ emails per week, in SiteAdvisor’s tests), yourgiftcards.com (147 emails per week), and everyfreegift.com (86). All three of these sites remain TRUSTe members listed on TRUSTe’s current member list.
  • TRUSTe continues to certify freecreditreport.com, which offers a “free” credit report that actually costs users $12.95/month if they don’t remember to cancel — a practice so misleading it prompted FTC litigation.
  • TRUSTe has certified Hotbar (now owned by 180solutions) and Hotbar’s Wowpapers.com site — advertising software that tracks users’ browsing and shows extra pop-ups.
  • In January 2005, mere days after I reported eZula’s advertising software becoming installed without consent, TRUSTe’s newsletter specifically touted its certification of eZula.
  • TRUSTe even certified Gratis Internet, which was revealed to have sold 7.2 million users’ names, email addresses, home phone numbers, and street addresses, in specific violation of its privacy policy.

TRUSTe’s response claims that my conclusions somehow reflect SiteAdvisor idiosyncrasies. I disagree. I can’t imagine any reasonable, informed consumer wanting to do business with sites like these. TRUSTe can do better, and in the future, I hope it will.


I’m sometimes asked where I’m headed, personally and professionally. Posting a new academic article offers an appropriate occasion to explain. I’m still working on my economics Ph.D., having drafted several papers about pay-per-click advertising (bidding strategies, efficiency, revenue comparisons), with more in the pipeline. After that? An academic job might be a good fit, though that’s not the only option. Here too, I’d welcome suggestions.

Which Anti-Spyware Programs Delete Which Cookies?

I’ve always been puzzled by the divergent attitudes of anti-spyware programs towards advertising cookies. Some anti-spyware programs take their criticism to the extreme, with terms like “spy cookies” and serious overstatements of the alleged harm from cookies. Others ignore cookies altogether. In between are some interesting alternatives — like ignoring cookies by default (but with optional detection), giving users an easy way to hide cookie detections, and flagging cookies as “low risk” detections.

I understand why some users are concerned about cookies. It’s odd and, at first, surprising that “just” visiting a web site can deposit files on a user’s hard disk. Cookies are often hard or impossible to read by hand, and ad networks’ cookies offer users no direct benefit.

Unrequested arrival, no benefit to users — sounds a lot like spyware? So say some, including the distinguished Walt Mossberg. But that’s actually not my view. Unlike the spyware I focus on, cookies don’t interrupt users with extra ads, don’t slow users’ PCs, can’t crash, and require only trivial bandwidth, memory, and CPU time.

Cookies do have some privacy consequences — especially when they integrate users’ behavior on multiple sites. But such tracking only occurs to the extent that the respective sites allow it — an important check on the scope of such practices. That’s not to say shared cookies can’t be objectionable, but to my eye these concerns are small compared with more pressing threats to online privacy (like search engine data retention). Plus, ad networks usually address privacy worries through privacy policies limiting how users’ data may be used.

All in all, I don’t think cookies raise many serious concerns for typical users. Still, I know and respect others who hold contrary views. It seems reasonable people can disagree on this issue, especially on the harder cases posed by certain shared cookies.

Earlier this summer, Vinny Lingham and Clicks2Customers asked me to test the current state of cookie detections by major anti-spyware programs. They had noticed that for those anti-spyware programs that detect cookies, not all cookies are equally affected. Which cookies are most affected? By which anti-spyware programs? I ran tests to see — forming a suite of cookies, then scanning them with the leading anti-spyware programs.

Vinny is generously letting me share my results with others who are interested. The details:

Cookies Detected by Anti-Spyware Programs: The Current Status

See also Vinny’s introduction and commentary.