AppEsteem Blog

They say you gave informed consent (but did you really?)

We've recently noticed that some bundlers are helping apps with sneaky (and insufficient) ways to obtain what they claim is informed user consent.

Here's an example. Let's say you're installing a game on your PC. Once the game is installed, you get an offer for a free super cool web browser. In this case, the game installer is considered a "bundler", and if you do install that web browser, that company will pay the game vendor.

Now imagine that, in the fine print, the offer claims that your acceptance means that you agree to set the super cool web browser to be your default browser. Here's a pic of just such an offer for Opera while the consumer was installing Recuva:

The bundler may claim that you gave informed consent to change your default browser setting, but they're wrong. That's because only the app that needs your informed consent may obtain it. In the above example, Opera (and not the Recuva installer) needs to obtain informed user consent to set itself as your default browser.

It isn't hard for Opera to obtain your informed consent. They can ask you after they're installed if you want them to be your default web browser, or they can ask you during their install (instead of installing silently).
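For app vendors wondering what that looks like in practice, here is a minimal, hypothetical sketch (in Python) of an explicit consent prompt. All of the names below are made up for illustration; the point is simply that the default setting only changes on an explicit "yes", and doing nothing leaves the user's current default untouched.

    def set_as_default_browser():
        # Placeholder for the real, OS-specific call; here we just record the choice.
        print("SuperCool Browser is now your default browser.")

    def ask_default_browser_consent() -> bool:
        # Ask clearly, with "no" as the default answer.
        answer = input("Make SuperCool Browser your default browser? [y/N] ")
        return answer.strip().lower() in ("y", "yes")

    if ask_default_browser_consent():
        set_as_default_browser()
    # Declining (or just pressing Enter) changes nothing.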

So hopefully it's clear that bundlers cannot obtain your informed consent for anything, including changing default programs or search, lowering your security posture, or transmitting sensitive information about you. If we see a bundler trying to trick consumers into thinking that they gave their informed consent, we'll take the following steps:

  1. We'll call out the bundler as an active Deceptor for failing ACR-014, which requires that an app "Is truthful and not misleading or confusing with the intent to deceive; can be substantiated; is not unfair."
  2. We'll call out any third-party ad network involved (in this case, it was Rise making the offer); they know better than to try to trick the consumer into granting insufficient consent.
  3. We'll call out the app that claims other apps and websites obtained informed user consent on their behalf. It's the job of the app to obtain informed user consent, not to ask another program or website to get it for them.

 

 

Not yours! Consumer-linkable information belongs to the consumer.

Imagine you walk into my bookshop. As you peruse my books, I take note of where you pause and what you pick up. I ask you questions about what you're looking for, and I make suggestions. I'm hoping that the more I get to know you, the better I can serve you, and the better chance that you'll become a regular.

My imaginary bookshop is small, and my landlord offered me a free security system. It uses video cameras focused on my aisles, and it sends the video feed to the cloud alerting me when it thinks somebody steals a book.

Now imagine that once you leave my bookshop, I take what I learn about you and turn that into extra money for me: I sell your wish list to others, and I get advertisers to pay me to target you based on your book interests. Also, that free security system let my landlord do the same, based on what else it learned about you while analyzing the security tapes.

My imaginary bookshop sounds both big brother-ish and unfair to my customers. Fortunately, it would be hard for me to run this kind of bookshop, because I'd have to put a big notice on my door that says something like this, "Attention: your entry into this bookshop is your consent to being tracked and targeted for future advertising, both by me and by my landlord, who in return has provided us a free security system. Scan here to see the full privacy policy."

My guess is that if I put that kind of disclosure on my imaginary bookshop, I wouldn't get many customers coming through my door. My bookshop would be rather empty, because although customers are happy for me to use data linked to them to give them a better shopping experience, they would not be happy if I monetized, traded, or sold their information. The reason? Because any consumer-linkable information I collect belongs to the consumer, not to me.

This makes sense in the physical world, but somehow the Internet doesn't work this way. It should, but we're not there yet.

Somehow the Internet has convinced us that the price of better searches and entertainment only comes when others are allowed to trade, sell, or monetize the information that can be linked back to us. That's not fair, because your non-public information belongs to you, and, except for well-regulated and limited scenarios, nobody else should have the right to sell it, or use it to make money, or permit others to gather it from you.

In what kinds of scenarios can making a business out of consumer-linkable information be acceptable? Two examples come to mind: medical information and credit scores. Unlike the Internet, both medical information and credit scores involve well-regulated industries that have been built up around controlling how this information is protected, limited in its usage, and used appropriately. Strong consumer-focused laws make it clear that you own your data, and they provide protection against misuse. Your consent must be obtained, and only for specific use cases. We think these examples provide good models for how consumer-linkable information collected on the Internet should be handled.

Just because a company knows something personal about you, it doesn't have the right to sell that information, make money from it, or trade it to somebody else. And it doesn't have the right to let others collect more linkable information about you. If a company does any of this, especially online while you visit its websites or use its apps, that company is a polluter, and it should be stopped.

Here are just a few examples of how a website or app can pollute your internet experience and take advantage of your linkable information:

  • Google and Bing give you free searches in exchange for learning what you search for and click on, then use your information to charge advertisers to target you while you search.
  • Taboola and Outbrain pay more ad revenue to news websites who let them track and target you based on what they learn about your interests.
  • Google gives tens of millions of websites free analytics in exchange for collecting data on what you do on those websites.
  • Meta, Google, Microsoft, Yahoo, and so many other ad networks have huge businesses based on them selling advertisers access to your information so you can be better targeted.

All of the above examples are unacceptable, because they're all trading in something that doesn't belong to them: information linkable back to you. Only polluters aggregate other people's linkable information and try to turn it into money. Only polluters claim that you gave them consent to resell and repackage and build businesses around your linkable information.

Meanwhile, the privacy debate swirling around various governments seems to be focused on how consumers can request to see what linkable information companies are exploiting, and whether they can request to be forgotten. There is talk about "do not sell" flags in browsers that websites and ad networks can voluntarily consume, but without any teeth behind them. We think these initiatives are missing the basic point: no company should consider consumer-linkable information theirs to resell, to trade, or to monetize outside of providing better and relevant first-party experiences directly back to the consumer.

The BigTech companies who grow their businesses by exploiting consumer-linkable information have been successful in fending off regulation of how they purchase, aggregate, use, and monetize this information. This needs to change.

This is why we've added two new polluter indicators, focused on consumer-linkable information, to our list. AP-10 says companies can collect and use consumer-linkable information to improve their direct services, but they can't use it to sell, monetize, or improve third-party services. AP-11 says companies can't let others collect consumer-linkable information on their sites or in their apps.

It's time to call out this exploitative behavior for what it is: internet pollution. It's time for it to stop.

 

Why Should You Have an Ad-Blocker?

Unfortunately, browsing the web can be dangerous due to the many potential threats lurking in cyberspace. Hackers and scammers steal personal information like passwords and bank accounts, and they infect your devices with malware and unwanted software. Many websites contain inappropriate content that can be accessed by minors, exposing them to explicit material or leading them to interact with online predators.

But beyond the obvious threats that you may already be aware of, new threats have emerged from the countless advertisements we face online.

What is Ad-Pollution?

Ad pollution is how we refer to the unfair digital advertisements that bombard us while we're online. This occurs on websites, social media platforms, and other digital outlets. 

In best-case scenarios, ad pollution disrupts your overall media consumption experience with slower loading times and clogged-up feeds.

In worst-case scenarios, ad pollution delivers you into the hands of opportunistic cybercriminals by impersonating brands and directing users to malicious sites that host ransomware or steal your login credentials and other sensitive financial information. This technique is referred to as malvertising.

The Rise of Malvertising on Major Search Engines 

Malvertising attackers buy ads on legitimate search engines and advertising networks on popular websites, including video streaming sites, news sites, blogs, and more. The ads lure you into downloading unwanted software or running malicious code.

Malvertising campaigns are designed to be hard to detect and often use the latest technology to stay ahead of security measures. By placing native ads on web pages and tricking you into thinking they’re just more content that is safe to click, criminals can steal your personal information, such as credit card details or login credentials. They may also redirect you to phishing websites or install malicious software onto your computer without you realizing it.

For this reason, it's important for you to be aware of the risks associated with malvertising and take steps to protect yourself. The easiest way to protect yourself from the dangers associated with malvertising and the annoyances associated with ad pollution is to install a reputable ad-blocking app. 

What is an Ad-Blocker? 

Ad-blocking software is designed to block ads, especially ad pollution, from appearing on websites. It works by detecting and preventing the loading of online advertisements, including pop-up windows, banner ads, streaming audio or video ads, autoplay video ads, and more.

Ad-blocking technology can be found as an add-on or extension for popular web browsers like Chrome, Edge, and Firefox. It also exists as a standalone application that works with any browser. By blocking these intrusive advertisements, users are able to browse the internet without being inundated with unwanted content. 

Additionally, ad-blocking technology can help protect user privacy while browsing online by blocking third party cookies or tracking scripts used by advertisers to track user activity across different sites. Overall, ad-blocking software is a great way for users to reclaim control over their online experience.
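To make the mechanism a little more concrete, here is a minimal, hypothetical Python sketch of the blocklist idea behind ad blockers. The domains below are placeholders rather than a real filter list, and real ad blockers use much richer rules (plus cosmetic filtering to hide ad elements on the page), but the core decision is the same: does this request match a known ad or tracker pattern?

    from urllib.parse import urlparse

    # Placeholder blocklist; real ad blockers ship large, regularly updated filter lists.
    BLOCKED_DOMAINS = {"ads.example-network.com", "tracker.example-analytics.net"}

    def should_block(request_url: str) -> bool:
        # Block if the request's host, or any parent domain, is on the blocklist.
        host = urlparse(request_url).hostname or ""
        parts = host.split(".")
        return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

    # A browser extension would run a check like this before each outgoing request.
    print(should_block("https://ads.example-network.com/banner.js"))    # True  -> blocked
    print(should_block("https://news.example-site.com/article.html"))   # False -> allowed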

The FBI Recommends the use of Ad-Blocking Software 

For most of the same reasons listed above, last fall the FBI formally recommended the use of ad-blocking software to protect internet users from malicious online advertisements, particularly the kind that can be used to spread malware and viruses that can damage your computer or steal personal information.

The FBI's public service announcement, titled “Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users,” recommended that individuals, “Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.”

By blocking ads, users are also protecting themselves from data mining practices by companies that use online ads to track user behavior in order to target advertising more effectively. Earlier this year, social media giant Meta (Facebook, WhatsApp, Instagram, Messenger) was fined €390 million for violating the General Data Protection Regulation (GDPR).

In that case, regulators found that Meta failed to protect user privacy and collected large amounts of personal data without obtaining permission from its customers. This gave Meta access to sensitive information, such as age, gender, and political views, that would be difficult for any other company to acquire through traditional methods. And it made it easier for attackers to trick you with their malvertising.

Ad-blocking software provides a layer of protection against these threats and helps you maintain your privacy while browsing the internet. With online scams netting cybercriminals billions of dollars yearly, it is important for you to take steps toward safeguarding your information when surfing the web, and ad-blocking software is one of the easiest and most effective ways to do this.

 

ACR-013 is coming soon - are you ready?

Starting on April 1, when we find apps that violate ACR-013, we'll list them as active Deceptors. If you're an app vendor who makes offers to install additional software, or you're a third-party offer provider, this blog post is for you.

We think that app monetization through making offers for other apps is legitimate, and it should be allowed. That being said, we have identified a key scenario in which we believe it's almost impossible for consumers to distinguish the acceptance of another offer from the work they're already performing. ACR-013 will be violated when certain kinds of offers are made during this key scenario.

 

ACR-013 only targets silent-install offers for unrelated software that interrupt committed user acquisition workflows without first obtaining consent. But that's quite a mouthful, so you can use this rubric to unpack it and see if ACR-013 applies to your offer:

 

  1. Was the offer presented during a committed user workflow, such as while the consumer is answering the install questions? If not, then ACR-013 doesn't apply to your offer.
  2. Was the offer presented in a way that required the consumer to answer or wait before continuing? If not, then ACR-013 doesn't apply to your offer.
  3. Will acceptance of the offer result in the silent installation of another app? If not, then ACR-013 doesn't apply to your offer.
  4. Is the offer related or essential to your app? If it is, then ACR-013 doesn't apply to your offer.
  5. Immediately prior to making the offer, did you obtain explicit consent to do so? If you did, then ACR-013 doesn't apply to your offer.  (A big thank you to Keren and Itay at CSA for helping us understand the importance of this scenario -- we added this after their feedback)

 

Restating this with references to the rubric: ACR-013 only targets silent-install offers (#3) for unrelated software (#4) that interrupt (#2) committed user acquisition workflows (#1) without first obtaining consent (#5).
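If it helps to sanity-check your own offer, here is a minimal Python sketch that encodes the five questions above. The function and parameter names are ours, not part of the requirement text, and it only illustrates the rubric's logic; the actual determination is made during our review.

    def acr_013_applies(in_committed_workflow,     # 1. shown during a committed user workflow?
                        blocks_progress,           # 2. must the consumer answer or wait to continue?
                        installs_silently,         # 3. does acceptance silently install another app?
                        related_or_essential,      # 4. is the offered software related or essential to your app?
                        prior_explicit_consent):   # 5. was explicit consent obtained immediately beforehand?
        # ACR-013 applies only when all five conditions line up the wrong way.
        return (in_committed_workflow
                and blocks_progress
                and installs_silently
                and not related_or_essential
                and not prior_explicit_consent)

    # Example: a blocking, silent-install offer for an unrelated app, shown mid-install with no prior consent.
    print(acr_013_applies(True, True, True, False, False))  # True -> the offer would violate ACR-013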


So what should you do if you think your offer violates ACR-013? We suggest the following remedies (this isn't an exhaustive list, but it may spark your own creativity):

  1. Consider moving your offer out of the committed user workflow. For instance, if the offer was inside the install, you could place it at the end, after the consumer knows the installation is complete.
  2. Consider making your offer without requiring a response from the user to continue the workflow.
  3. Upon acceptance, consider not silently installing the app, but instead showing a landing page or launching an interactive install.
  4. Consider showing how the offer is for software essential or related to your app.
  5. Consider obtaining explicit consent for showing the offer. 

In our studies of the software monetization industry, we've identified many application vendors whose software offers already comply with ACR-013. But because we understand that this change may be disruptive to some application vendors, and also to some third-party offer providers, we've spent several months informing the industry, answering their questions, listening to app vendors, and improving the ACR to be sure we get it right. We "froze" the words in ACR-013 in January, and since then we have been working on spreading the word throughout the industry.

We're grateful for the support we've received from CleanApps.org for hosting a webinar, and from CSA for distributing to their constituents, collecting feedback, and holding a roundtable discussion.

Looking forward, we're confident that this ACR will significantly reduce the consumer dissatisfaction that comes from realizing they inadvertently ended up with additional software installed, while preserving the ability of app vendors to monetize through offers.

 

2022 Review, and What's Coming Next

2022 was a busy year for AppEsteem. Here's what we accomplished:

  1. We moved forward with two new ACRs that affect the bundler industry: ACR-013 (which prevents interrupting install/uninstall/conversion with un-consented offers), and ACR-060 (which requires offers to disclose the offering network). These two ACRs are meant to reduce consumer confusion and dissatisfaction, and they will go into effect on April 1. You can read more about them, including the requirement, our intent, and our guidance, on our Requirements checklist.
  2. We took a stand against ad pollution by publishing pollution indicators, and publicly calling out our first set of ad polluters.
  3. We updated our browser safety consumer apps and services (available on Browse.live) by releasing Browse.live Ad Control, a free browser extension that blocks ads from ad polluters, and Browse.live Search, an ad-pollution-free, anonymous search engine.
  4. We called out hundreds of active Deceptors, and we certified hundreds of clean, consumer-respecting apps.
  5. We ran monthly tests against the main AV products to determine how good they were at blocking Deceptors and allowing certified apps.

Not bad for a year in which we were mostly still working from home and postponing almost all customer visits.

In 2023, our mission won't change. We'll continue to help clean apps thrive by finding ways to protect consumers from getting tricked, scared, or fooled. Here's what we plan to focus on:

  1. We'll start enforcing the bundler ACRs (ACR-013 and ACR-060). We'll work to stop apps from violating these ACRs, including hunting for them, reaching out to them, and listing them on our active Deceptor list.
  2. We'll keep calling out Deceptors and Ad Polluters, so we can get them to clean up.
  3. We'll continue to expand our Browse.live consumer safety product line so that consumers can have safer and cleaner internet experiences.
  4. We'll look for more ways to encourage the AVs to better protect consumers, both on the system and in the browser. We'll do this with our feeds, our testing, our technology, and with releasing our own apps.

We're winning the fight against deceptive apps, and our clean ecosystem makes this possible. Thank you to our app makers who get their clean apps certified, our AV partners who use our Deceptor and Certified feeds to protect consumers, and our customers who use our Browse.live apps to make their internet experiences safer and cleaner.

Happy New Year from all of us at AppEsteem!

 

Security-reducing apps: a call to action

(Hong Jia and Dennis Batchelder)

We think that many AVs need to update their (potentially) unwanted software policies to make sure they can block apps that reduce security without first obtaining informed user consent. We gave a talk yesterday at AVAR 2022 in Singapore to make our case, show which AVs are currently struggling with protecting their customers against these apps, and ask them to update their policies so their customers can be better protected.

You can see the slides we used for the presentation here.

This was our abstract:

As AVs become better operationalized in their fight against unwanted software (UwS), their combined pressure is driving the software monetization industry toward finding the gaps in AV policies so they can continue to exploit consumers for easy money.

The big gap in AV policies these days, unfortunately, is around apps that make consumers’ computers more vulnerable to attacks. The result? A proliferation of apps that needlessly reduce their customers’ security postures and set them up for future attacks, without first obtaining informed user consent. Examples of these apps include VPNs that install self-signed trusted root certificates and free apps that monetize by installing proxies that share the consumer’s internet connection and processor.

Lately these security-reducing apps that don’t obtain informed consent are grabbing public attention: articles about them are popping up in both security blogs and computer industry news. Some platforms and AVs are beginning to respond, but only by adding detections after others have called these apps out. Even so, the platforms and AVs have been slow to update their policies, and slow to detect these apps as UwS, which leaves a gap that software monetizers continue to exploit.

Our session will show examples of how these apps reduce their customers’ security postures. We will highlight the platform and AV public policy gaps that have led to their spread. We’ll make suggestions as to how AVs can enhance their policies to better protect their customers from these apps.

Redefining the fight against Ad Pollution

(Dennis Batchelder and Hong Jia - 3 November 2022)

First, we’ll start with the bad news. Unfair advertising pollutes every consumer’s browsing experience, and it sure seems like an impossible problem to solve. Who’s going to tell the big tech giants what to do, anyway? They control most browsers, search engines, and advertising, and they own many of the most popular websites – who can stand up to that?

But, as you might have expected, we have good news for you. Today we launched a way to fight ad pollution. It uses the same successful model we developed in our fight against unfair software apps, so we know it can work. You can read about the launch in our press release here.

About the model we’re using: we designed it around our belief that the best way to drive change in the software monetizer ecosystem was to split apps into three buckets: unwanted, potentially unwanted, and clean. After many pivots, trials, and assistance, we landed on three key initiatives:

  1. Define the rules for what makes an app unwanted (and get them accepted).
  2. Raise urgency by publicly calling out the unwanted apps (we called them Deceptors).
  3. Work with the security industry to develop well-defined rules for what makes an app clean.

Our model and these initiatives worked well against unfair apps. Now we’re going to apply it to unfair advertising. Today, we announced the following:

  1. The publication of our first set of Polluter Indicators – rules that identify the kinds of unfair advertising practices that hurt consumers and ruin their browsing experiences.
  2. The release of our first set of ad-tech companies that we consider Ad Polluters, and a public call to block their ads.
  3. A partnership program with the security and ad-blocking ecosystems to accelerate the fight against ad pollution. 

It’s a bit scary for our tiny company to call out these big names. But we’re convinced that their ad practices are both disrespectful and unfair to consumers, and that they need to change their behavior. We want the fight against ad pollution to extend beyond a theoretical privacy discussion; we want these companies held accountable for the mess they’ve made of internet browsing experiences. And we want them to change their ways, just like the software monetization industry has done.

Big tech wants us to believe that market forces alone can drive them to self-regulate, but looking at the awful state of ads today, we know this isn’t true. We see that as big tech dominates, the power of the consumer voice diminishes. Our fight against ad pollution is our attempt to bring the consumer perspective back to say which harmful and annoying advertising behaviors must be stopped.

Over the coming months we’ll update our Polluter list, announce browser tools that will automatically block just the ad polluters, give advice for how we think ad polluters should adjust their strategies, and work with partners to build up a strong enforcement ecosystem. We’d love to hear from you about our effort, or have you join us in the fight against ad pollution, or listen to your ideas on how we can improve. 

Thanks, please help us spread the word, and stay tuned!

Stop Interrupting! New ACR-013 goes live in April 2023

In April 2023, we'll start enforcing ACR-013 (you can find it on our ACR checklist), which is our attempt to help the software monetization industry get over their habit of making unrelated offers during critical moments of the software experience.

We want the industry to change their interrupting behavior, because we know that consumers are tricked into thinking that these kinds of offers and ads are part of the app, and that their acceptance is required. We don't want consumers to be tricked, so we worked with the anti-malware ecosystem and the platforms to come up with this new Deceptor-level requirement to stop this behavior.

Here's a real-life example: during an install of the Opera web browser, as the user clicks through the EULA and privacy policy acceptance and chooses their settings, the install experience is interrupted with an offer for an unrelated app (in this case, it was an offer for Hotspot Shield):

We believe that this interrupting behavior, especially when it occurs during an app's acquisition workflows (like install, upgrade, uninstall, purchase), is misleading, deceptive, and unwanted. Between now and the end of March, we'll work on notifying the software monetization industry about this change, including answering any questions they may have. Starting next April, we'll enforce this ACR: we won't certify apps that break it, and, at our discretion, we'll list apps breaking it on our active Deceptor list.

The ACR Checklist link above shows the exact requirement, the intent behind the requirement, and some practical guidance for how to be in compliance with it. If you have additional questions about your own implementation for ACR-013, we encourage you to get in touch with us at [email protected]. We'll do our best to answer. You may also want to consider signing up for our one-time-review service, or our certification service, both of which can help you stay in compliance with all of our Application Certification Requirements.


Nine Ways Ad Pollution Ruins your Internet Browsing

By Keven Goh 

Introduction

Millions of people around the globe rely on the Internet for work, communication, research, and entertainment. However, along with the development of technology comes the inevitable rat race for profits. Companies have flooded the Internet with advertising. Advertising isn’t inherently bad; websites require an income to sustain themselves. However, when websites and advertising networks resort to unfair practices that annoy, harass, trick, or take advantage of consumers, they create ad pollution. This blog examines nine ways that ad pollution ruins your browsing experience.

1: Ad Pollution interrupts your browsing

Have you ever been peacefully browsing a website, only for a massive ad to pop up out of nowhere, blocking your entire screen? How about videos that are bisected by unskippable ads, making you forget what you were just watching? Ad polluters seem to believe that by interrupting you, they’ll make more money.

2: Ad pollution disguises itself as content

Another trick ad polluters use is disguising themselves as a normal part of whatever website you’re looking at. One example is how native ads are designed to look like news articles. Another example is in search. If you’ve ever used a search engine, you’ve probably noticed that many of the returned items are ads, formatted and placed to look like actual search results. Consumers mistakenly click on these native ads and fake search results billions of times every day.

3: Ad Pollution distracts and annoys

It’s part of an ad’s job to try to get your attention, but ad polluters take attention-grabbing way too far. We’ve all seen those ads that flood your computer with irritating notification sounds, or use flashy, bothersome videos to grab your attention.

4: Ad Pollution chases you

Sometimes ad pollution launches a new browser window or tab without your consent, acting as if this were part of your browser’s normal function. Maybe you’ve closed your browser, only to discover that some ad polluter secretly opened a separate window linking to their site. Often these ads keep opening other browser windows as soon as you close them.

5: Ad Pollution overwhelms you

Ad polluters may embed so many ads throughout a news article or blog that it’s impossible to pay attention to what you’re trying to read. While this may earn them more money, it makes it difficult to get what you wanted out of that site to begin with.

6: Ad Pollution tricks you into clicking

Have you ever tried to click “play” on a video, only for an ad to pop up below your cursor right before you hit the play button? Some ad polluters are even more direct, throwing pop-up ads at you even after the content begins, or forcing you to watch an ad for a set amount of time before getting to the content itself.

 

7: Ad Pollution disguises itself as website functions

How many times have you clicked on what you thought was a download button, only to discover that it was an ad that redirected you to a different website? What about play buttons or even exit buttons that are disguised as ads? This kind of ad pollution is especially annoying because they don’t even bother telling you what they’re advertising; they just trick you into clicking.

8: Ad Pollution targets you without permission

We’ve all been through it—we search for something, then get bombarded by ads based on that search across every other website we visit. Targeted ads make more money, and ad polluters turn on targeting by default, without your explicit consent. Maybe you clicked to allow cookies, but the supposed “consent” to use your own information to target you was buried deep inside a privacy policy.

9: Ad Pollution resists when you try to turn it off

Ad pollution doesn’t make itself easy to disable. Turning off ad personalization might require sifting through pages of convoluted menus to find the right option. It may require you to enable other forms of personalization or data tracking, like third-party cookies. And since many ads on websites are auctioned to the highest bidder, disabling one network’s ad personalization can still leave you targeted by other ad networks. Furthermore, when you try to disable ad personalization, you may be warned that you’ll just end up seeing more ads.

 

A vibrant security ecosystem *can* work

Last week, AppEsteem was mentioned in several news articles reporting on the VPN apps we listed as Deceptors. We listed them after our research showed these apps automatically installing self-signed trusted root certificates without obtaining informed user consent for the risk this introduces.

Here are some links to the news articles: one on techradar and one on cnet.

We are already seeing progress from some of these VPN apps to fix these Deceptor-level issues. Some of the apps now obtain informed consent; other apps are moving away from introducing this security risk. Both approaches bring a better, safer experience to consumers of VPN apps.

Driving change across an industry isn’t easy; this worked because of a vibrant security ecosystem:

  • Our AV partners use our research and feeds and usually detect/block active Deceptors and allow Certified apps. This is a direct way to let vendors know when they need to change, as it affects their ability to keep their apps on a consumer’s device.
  • Security articles in the media bring attention, encourage more AVs to use our research and feeds, and send a message to the vendor’s employees and investors that their app needs immediate attention.

We are excited by these developments and are looking forward to continuing to work with VPNs and other apps to help facilitate a safer online environment. We love how the security industry can work together to improve consumer safety!

 
