Moving from “opt-out” to “opt-in” culture

“We’re here for consumers. Why, without them, we’d have nobody whose privacy we can invade so that we can exploit them to advertisers.”

Comment about Google under user comments on:

Teaching courses on digital marketing and ethics in information technology has been a great learning experience for me. In discussions on topics such as information privacy and security, critical issues such as identity theft, spamming, and robocalling have sparked great interest because these are personal matters for all of us as well as for the reputation management of the businesses we represent.

By encouraging learners to confidentially share their experiences with these ethical and legal abuses of information technology, I have become aware of the damage that these practices can inflict on people's personal and professional lives.

  • Automated phone calling (robocalling) from marketers that wakes people in the early morning and disturbs home life in the evening.
  • Email spam that clutters inboxes, and phishing attempts that lead the unsuspecting to Web sites where malware can be dropped onto their systems.
  • Credit reporting agencies that share consumer data but do not clearly inform users of that sharing or give them sufficient control over it.
  • Web site privacy policies that omit, hide, or make difficult to understand any information about user profiling and the data gathering and sharing that result from it.

This is but a sampling of the many ways in which, as individuals, we are left with little protection and even less of the information that could help us make sense of preserving our privacy and security.

While there are regulations that attempt to address these abuses, they are not comprehensive and typically contain loopholes that allow these practices to continue. Enforcement, meanwhile, is often toothless.

Among the greatest loopholes is the notion that, as users, we must opt out of practices in which we don't want to participate. In the case of Web sites and services, we should take responsibility for reading the privacy policy before using the service, but in many cases we either miss the details amid the fine print, or the data gathering and sharing are simply not made evident. In still other cases, such as with credit reporting agencies, we are never made aware of these practices at all.

It's a shame that we as individual citizens have to hunt and peck through hidden resources and fine print to find ways to opt out of what is, in terms of our privacy and security, an essentially unethical practice, especially when the damage that uncontrolled data gathering and sharing can cause is so clear.

Think of how much more elusive opting out will become with the increasing use of small mobile devices such as smartphones, or of the family's Internet-capable TV.

Rather than detail the ethical and legal abuses in which we are engulfed with our increasing dependency on online communication and transactions or the impotence of chasing after abuse with regulation, I submit that we need to overturn the current system of “opt-out” abuse and move toward a more universal “opt-in with informed consent” regulatory framework.
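To make the distinction concrete, here is a minimal, purely hypothetical sketch (not drawn from any real service's API, and the names are my own invention) of the two defaults: under the current opt-out culture, data sharing is on unless the user acts, while under an "opt-in with informed consent" model it stays off until the user explicitly and knowingly grants it for a stated purpose.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentSettings:
    """Hypothetical per-user consent flags for a Web service."""
    # "Opt-out" culture: sharing defaults to ON; the burden is on the user
    # to find the setting and turn it off.
    opt_out_default_sharing: bool = True
    # "Opt-in with informed consent": sharing defaults to OFF and can only
    # be enabled by an explicit, informed user action.
    opt_in_sharing: bool = False
    # Record of what the user actually consented to, and for what purpose.
    consent_record: list = field(default_factory=list)

    def grant_informed_consent(self, purpose: str) -> None:
        """User explicitly enables sharing for a clearly stated purpose."""
        self.consent_record.append(purpose)
        self.opt_in_sharing = True

settings = ConsentSettings()
assert settings.opt_in_sharing is False   # nothing is shared by default
settings.grant_informed_consent("ad personalization")
assert settings.opt_in_sharing is True    # shared only after explicit consent
```

The point of the sketch is the default: under opt-in, inaction means no sharing, and the service must record the purpose for which consent was granted.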

Yes, overarching regulation of this type can place more responsibility and cost on businesses and other organizations, but that is what we should expect in terms of our broader constitutional rights.

I am not recommending the shotgun approach to regulation seen in the SOPA and PIPA legislative efforts, where the regulation itself is abusive of those rights, but rather regulation focused on the mechanism of individual informed consent.

Would there be a cost in shifting the focus from requiring users to opt-out to providing informed consent of any data gathering and data sharing prior to its use on Web sites and other channels of communication?

Yes, but I propose that the cost of shifting these mechanisms in favor of transparency and choice will actually create more benefit for both providers and users in the long run.

As it stands right now, as in many aspects of our ethical and legal landscape, we have a short-term view of gain that is already failing.

As individuals increasingly become aware of, and experience, the damage associated with the unregulated and uncontrolled use of their personal data and the intrusion on their personal privacy, they will find the means to opt out of a system that does not merit their trust and, thus, their participation.

I welcome your comments on these issues and the solutions that you feel are warranted,


P.S. Only a few hours after writing this message, I received important information related to these issues.

I received an email from Google informing me of an upcoming change in their privacy policy which allows them to share more data internally while presumably protecting user privacy with a statement “we’ll never sell your personal information or share it without your permission (other than rare circumstances like valid legal requests).”

A subsequent examination of this new policy might reveal whether that statement is correct in all respects, or whether it does not fully disclose the nature and extent of user profiling and data sharing; but, as I have already noted, that examination takes time and effort. Should I trust Google and continue using its free services (which are of great value), or cancel my accounts? According to Google, that is the choice users must now make.

Update: Here is a quote and a link to an online article in The Wall Street Journal for case-based reasoning on this broader issue of privacy protection: “Google Inc. and other advertising companies have been bypassing the privacy settings of millions of people using Apple Inc.’s Web browser on their iPhones and computers—tracking the Web-browsing habits of people who intended for that kind of monitoring to be blocked.”

Also, on the topic of robocalling, I learned via broadcast TV news that the FCC issued new regulations on this intrusive, automated phone calling practice. While the rules add some teeth by giving this abusive practice a legal definition, they limit only automated calling, not calls placed by people exploiting the loophole of claiming some connection to an exempt non-profit organization. Thus, as with the other abusive practices, it remains a cat-and-mouse game in which we users are the mice.

Here are links to information on the FCC site on this topic and on opt-out practices for communications in general.

FCC Strengthens Consumer Protections Against Telemarketing Robocalls:

Unwanted Telephone Marketing Calls:

This entry was posted in blogs for business, customer experience, customer experience management, digital marketing, human factors in information systems design, information architecture, information ethics, management of information systems and technology, social change, social media, user experience, user-centered design.

10 Responses to Moving from “opt-out” to “opt-in” culture

  1. Joe says:

    If you're interested in privacy issues, you should read the link I'm sharing. The author is the CTO of the company I used to work for, and he is a well-known speaker in the privacy arena. He's also a really nice guy (extremely smart, too!).

    As far as opt-in vs. opt-out goes… I don't see it happening anytime soon. Companies that develop software and offer free services count on the fact that you won't take any action beyond installing and using their software. A lot of people know that Gmail "machine reads" your email and customizes ads based on it; I do. I'm not particularly overjoyed by that, but I understand they are a business and have to make money. However, I would pay a small fee to get rid of all the advertising and all the other issues you have to deal with when you use the free stuff. To some extent it's more about economics than privacy, but if I were ever bothered by my "digital footprint" online, there's really nothing I could do to eliminate it. In the digital world, your footprint can literally last forever.

  2. Doc says:

    Thanks Joe, for sharing that link and your insights into this issue. I would have to agree that a polar shift from opt-out to opt-in is unlikely anytime soon. Nonetheless, there are examples that are emerging. For example, Ghostery provides a useful social purpose as a free Web tracking blocker, but as explained in their Website FAQ, they need to support this free software service as a business.

    Although many software producers support free software with ad banners and similar revenue models, Evidon as the company behind Ghostery has chosen to support their free Ghostery service with an opt-in feature of its software that collects data about trackers and the ad servers behind them.

    This ranking data is collected from Ghostery users who opt-in to allowing this data to be collected and shared with the company. Evidon can then sell this data to the ad providers to support its business.

    As stated on Evidon’s FAQ page, users and their personal data are kept anonymous and the ranking service is limited to opt-in.

    Assuming that this position is clearly identified, transparent, and useful to users and advertisers alike, I think that the Evidon Ghostery business model works on an ethical level.

    Thanks for sharing,


    Ghostery™ FAQs

  3. Sarah Grech says:


    I really enjoyed reading the post on your blog, especially the four examples you used of ethical abuse of information technology: phishing emails, robocalling, credit reporting agencies passing along your data, and Web site policies that make it hard for readers to understand what they are signing up for. It is unfortunate that the use of information technology has come to this. I agree that companies get off the hook for ethical abuse of information technology due to the lack of enforcement. From taking an ethics in technology class at University of Maryland University College, I have learned more about how businesses are using technology in unethical ways, and how my own use of information technology is impacted by these unethical acts. Technology keeps advancing and will never stop at this point, which can be both a good thing and a bad thing. As you said, informed consent is an excellent start in enforcing ethical standards.


  4. Lucas says:

    To this day, the line between ethical and unethical use of social media can be a blur. Using social media to advertise is a massive positive in terms of technology and how far we have come as a society. However, that line can be crossed by ads that pop up with products or services that were never welcome in the first place. That can be described in many negative ways: invasion of privacy, unlawful disturbance, or just downright irritating. Such an event shows disregard by the marketer, and thus I would call it an unethical use of social media as a tool.

    Social media sites have terms of agreement and disclaimers that a large percentage of users or visitors do not read. This itsy-bitsy text could certainly be made more visible, because the consequences of clicking "AGREE" are far greater than the size of the font. Often, part of these terms makes it possible for the social media site to gather your information and sell it to other groups. The groups purchasing your data then use that information to target certain ads (banners, pop-ups, quick slide-bys) at users or visitors, ads that the user is more likely to engage with. Now, this may not be as irritating as the scenario in the paragraph above, but there is still a huge ethical question about the manner in which users' personal information is tossed around from organization to organization. So in this case, distributing a user's information may be legal, but the behavior could be much more transparent and honest: disclaimers and bold-enough yes/no permissions could be requested from users by each organization wanting access.

    We can go on and on about where the benefits of targeted ads cross the line of ethical social media marketing, but as convenient as the whole concept is for helping users make decisions, gathering background information on people behind their backs, in ways they are never told about, is, to me, kind of creepy and unethical. Unless there are focused efforts to improve transparency and maintain user consent, there will be a huge shadow of unethical practices around social media marketing.

  5. Doc says:

    Thanks Lucas for sharing your valued experiential insights from the vantage point of a social media user. The ethical issues you describe ultimately have an effect on the ways that users will become or remain engaged, so there is both an ethical and a business perspective that are present.

  6. Doc says:

    Thanks Sarah for sharing your thoughts on ethical abuse of information technology and the need for informed consent.

  7. Rachel GA says:

    Hi Doc,

    Thank you for your insightful comments regarding this rather complex issue! An interesting article on digital privacy ends with the comment, "We seldom realize the value of something until it's been lost" (Satell, 2014). I have to admit that I probably share my personal information a bit more freely than I should. Often this is done in an effort to simplify my life. I keep my cell phone's location feature turned on so I can quickly find a Starbucks near me. I enter my email address in order to download a coupon for a free appetizer at Chili's. Until recently, I had never researched a privacy policy on any website. I usually skim over privacy notifications from Google and Facebook. I seldom block cookies on websites, in order to avoid forgetting passwords. Perhaps I have felt I could easily undo most digital tracking if I wanted to. I'm not alone in this: consumer trends typically lean far more toward cost and convenience (Satell, 2014). The Pew Research Center found that more than half of consumers are comfortable with sharing personal data in return for easy access to online sites and features, in spite of an overwhelming majority that feel concern about their digital privacy (Madden, 2014). Less than a quarter of consumers feel they can exist online without any personal data being shared (Madden, 2014).

    In our current political climate, I admit my anxiety about privacy has increased. If I travel abroad, will my social media presence be examined as I seek to re-enter my country? Will my social and political views, clearly visible on this platform, become the catalyst to further questioning and harassment? What I see in the research cited above it that consumers are aware and concerned about this issue, but do not feel informed enough or empowered to take action (Madden, 2014). The constant tide of change in digital marketing creates a challenge for the consumer. The work ahead is finding ways to strike a balance between convenience and privacy, as well as educating and mobilizing consumers to act upon their privacy concerns.

    Madden, M. (2014, November 12). Public perceptions of privacy and security in the post-Snowden era. Pew Research Center. Retrieved from:
    Satell, G. (2014, December 1). Let's face it, we don't really care about privacy. Forbes. Retrieved from:

  8. Doc says:

    Thank you, Rachel, for sharing your valued insights. Your personal experience and the relevant research you cite confirm our need, as individuals and as those who will influence digital marketing decisions in organizations, to take a leading role in putting the customer first in data privacy policies. Thank you for sharing, Doc

  9. I think Facebook should automatically opt users out of some features by default, and only opt them in if they specifically choose that. When I “like” or comment on a post that is set to “public,” Facebook may choose to display that I liked or commented on the post on my friends’ news feeds. This means anyone I’m friends with on Facebook – whether or not they are friends with the poster – will view my activity. For many people, this can include co-workers, bosses, and family members.

    Many people are unhappy with this and have complained to Facebook. Their response is that if a post is public, you should expect that anyone can see it. However, I think this is much different from *announcing* it by putting it on the news feed. There should be a way to opt out of having this show on your friends' feeds, and any new users should be opted out by default.

    In addition to regular posts being displayed, RSVPing to a public event can result in it showing on your friends' feeds. Some people may be attending an event that they might not want everyone else knowing about. It's fine if someone who has the link to the event sees that they RSVP'd, but it's another thing altogether to have it displayed on all friends' feeds.

    Some friends of mine ended up quitting Facebook because of this. They ended up liking a model’s photo or a controversial political post, and it caused familial or work problems. I’ve mentioned to them that they should make sure to not like a post that is “public” or “friends of friends” and that the tiny icon in the post shows the privacy level. However, they have stated that this is too confusing and too much of a headache for them, and they’d rather just leave Facebook altogether.

  10. Doc says:

    Hi Rachel,

    Thank you for sharing your experiential insights on using a social networking site like Facebook. The need for clarity in using features as well as informed consent via opt-in or opt-out is made evident by these examples and the outcomes that you describe. Thank you for providing this valued case-based learning document, Doc
