The Hard Problem For ICT4D: Net Neutrality, Zero Rating And Algorithmic Culture

Is the ad-based business model destroying the Internet and, moreover, is it destroying our society?

As someone who has long been an advocate of free ICTs like email, social networking, Google Docs, and all other things free, I didn’t arrive at this focal point willingly or with any preconceptions. Like many of you reading this post, I had simply grown accustomed to ignoring the targeted ads that followed me around online, and in fact viewed them as a fair exchange for the technologies being offered. But as these technologies have continued to advance, their downsides have become bigger and more visible.

Last week, less than two full years after India snubbed Facebook’s Free Basics service and effectively banned all “zero-rating plans” (offerings of data exemptions or free service for only a limited set of services or sites), the now Republican-controlled Federal Communications Commission (FCC) announced that it will repeal the net-neutrality rules passed during the Obama era, in favor of lighter regulation of internet service providers (ISPs). The net neutrality rules were put in place in 2015 to ban discrimination against lawful content providers and to prohibit the blocking, throttling and paid prioritization of legal content sought by consumers.

Large ISPs, like the Comcasts, AT&Ts and Verizons of the world, will gain more power and control, particularly the kind that helps them better penetrate the ad-targeting and digital marketing world. Historically, the most valuable data for the algorithmic machinery of ad platforms like Google and Facebook has been the massive amounts of search and social media data they collect directly or via third parties. While ISPs cannot collect this type of encrypted data themselves, they can play a useful role in helping the big digital advertisers link users’ identities across devices (mobile, laptop, tablet, TV, etc.). Given the broader latitude proposed for ISPs, many people are apprehensive about how this will affect the cost of Internet services, how ISPs will handle consumer data going forward, and whether this signals the unraveling of net neutrality.
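To make the cross-device point concrete, here is a minimal Python sketch of what deterministic identity linking can look like. Everything in it (the identifiers, the records, the matching rule) is invented for illustration; no real ISP’s system is being described.

```python
# A minimal sketch of deterministic cross-device linking: device records that
# share a stable identifier (a subscriber account, a hashed login email) are
# merged into one identity spanning phone, laptop, and TV. All identifiers
# below are hypothetical.

from collections import defaultdict

records = [
    {"device": "phone",  "ids": {"acct:1138", "email#a1b2"}},
    {"device": "laptop", "ids": {"email#a1b2"}},
    {"device": "tv",     "ids": {"acct:1138"}},
    {"device": "tablet", "ids": {"email#ffee"}},  # a different household member
]

# Union-find: any two records sharing at least one identifier become one person.
parent = list(range(len(records)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

by_id = defaultdict(list)
for idx, rec in enumerate(records):
    for ident in rec["ids"]:
        by_id[ident].append(idx)
for idxs in by_id.values():
    for other in idxs[1:]:
        union(idxs[0], other)

people = defaultdict(list)
for idx, rec in enumerate(records):
    people[find(idx)].append(rec["device"])
print(list(people.values()))  # [['phone', 'laptop', 'tv'], ['tablet']]
```

The value to an advertiser is exactly what the output shows: three separate devices collapse into a single addressable person.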

What we do know is that earlier this year, even under the current net neutrality rules, the FCC ruled zero-rating plans permissible, for example a mobile service provider allowing a particular music streaming service to bypass its data limits. Net neutrality advocates believe that once the rules go away, we will see more drastic measures from companies, like a “fast lane” for a provider’s own content or a partner’s content and a “toll road”, or slowed connection, for everyone else’s. For many, the announcement by the FCC raises legitimate concern for the future of the open internet.

In addition, there is growing contention over the algorithms that power the pervasive ad-based business model. Algorithms are computer programs written as sets of step-by-step instructions, and they increasingly determine what information we are exposed to and what decisions are made about us. They allow digital advertisers to tailor and target ads effectively, and the more data they have, the better they perform. So nearly every website we visit as consumers shares data with ad networks and third parties, where it is collated, stored and fed to these algorithms. This enables them to learn continuously from our behavior, and from the behavior of people like us, so that they can infer our traits and target us.
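To ground the idea, here is a deliberately simplified Python sketch of the feedback loop just described: a toy click-prediction model that updates its weights with every observed click, so that more behavioral data means sharper targeting. The feature names, events and learning rate are all illustrative assumptions, not any real ad platform’s code.

```python
# Toy online click-prediction: a logistic model whose weights improve with
# every (user profile, did-they-click) event it observes.

import math

FEATURES = ["visited_travel_sites", "age_25_34", "searched_flights", "urban"]

def predict_click(weights, user):
    """Logistic score: estimated probability this user clicks the ad."""
    z = sum(weights[f] * user.get(f, 0.0) for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, user, clicked, lr=0.1):
    """One step of online gradient descent on log loss."""
    error = predict_click(weights, user) - (1.0 if clicked else 0.0)
    for f in FEATURES:
        weights[f] -= lr * error * user.get(f, 0.0)

# Start ignorant; learn from a stream of observed events.
weights = {f: 0.0 for f in FEATURES}
events = [
    ({"visited_travel_sites": 1, "searched_flights": 1}, True),
    ({"age_25_34": 1, "urban": 1}, False),
    ({"visited_travel_sites": 1, "urban": 1}, True),
]
for user, clicked in events:
    update(weights, user, clicked)

# Target only the users the model now scores above a threshold.
new_user = {"visited_travel_sites": 1, "searched_flights": 1}
if predict_click(weights, new_user) > 0.5:
    print("show travel ad")
```

The loop is the whole point: the model that decides who sees an ad is the same machinery that grows more precise with every click it records.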

And to the dismay of many people, “big data” and machine learning have enabled ad networks to use far more than our browsing history and basic demographics to target us with ads. They increasingly tap our offline data; they know our actual purchases, transactions and financial history. Massive data collectors like Facebook learn from our post history, our likes, even every location we’ve ever logged in from. Nothing is ever actually deleted. And from all this data, they can infer attributes about us such as our religious and political affiliations, our sexual preferences, even our weaknesses and vulnerabilities (e.g., intelligence, happiness, addictive traits).
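As a toy illustration of that inference step, consider the sketch below, which combines small, hypothetical signals from individual page likes into a probability estimate for a latent trait. The signal weights here are invented for the example; real systems learn them from massive training datasets.

```python
# Illustrative attribute inference: each page-like carries a small statistical
# signal about a latent trait, and enough likes together let a platform guess
# the trait with high confidence. Weights below are assumptions, not real data.

import math

# Hypothetical log-odds each like contributes to some latent trait.
LIKE_SIGNALS = {
    "page_hunting": 0.9,
    "page_country_music": 0.4,
    "page_yoga": -0.6,
    "page_documentaries": -0.3,
}

def infer_trait(likes, prior_log_odds=0.0):
    """Combine per-like signals into a probability estimate for the trait."""
    log_odds = prior_log_odds + sum(LIKE_SIGNALS.get(l, 0.0) for l in likes)
    return 1.0 / (1.0 + math.exp(-log_odds))

print(infer_trait(["page_hunting", "page_country_music"]))  # ~0.79
print(infer_trait(["page_yoga", "page_documentaries"]))     # ~0.29
```

No single like reveals much; the inference comes from accumulation, which is why “nothing is ever actually deleted” matters.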

So what happens when we are not just being targeted with ads?

It seemed that just as quickly as we noticed tailored ads following us around the Internet, the digital ad economy was making further leaps. The same algorithms were also arranging and delivering other forms of personalized content to our news feeds and curating lists of highlighted content across our social media platforms and the sites we frequent, like YouTube.

While it may be easy to see the usefulness in having content specifically tailored and automatically populated to our devices, concern quickly emerges when we discover these algorithms may be edited and purposefully engineered to, for example, suppress certain political perspectives while intentionally promoting or giving priority to others.

Facebook was alleged, although CEO Mark Zuckerberg vehemently denied it, to have routinely edited its “trending topics” algorithms in order to tone down conservative views during the 2016 U.S. presidential election. Whether or not you believe this type of algorithmic gerrymandering is occurring, two realizations should alarm us: 1.) We are vulnerable to being nudged and manipulated by such “policy directed algorithms”, and 2.) If a digital platform like Facebook had decided to promote a particular viewpoint, how would we have even known unless the company disclosed it?
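To see why disclosure matters, here is a hypothetical illustration of how a single hidden “policy multiplier” inside a ranking function could demote one viewpoint. The topics, numbers and weights are all invented; the point is that users only ever see the final ordering, never the weights that produced it.

```python
# Hypothetical ranker: one editable knob silently reweights a viewpoint.
# Nothing in the visible output reveals that the knob exists.

topics = [
    {"name": "Topic A", "engagement": 9500, "viewpoint": "conservative"},
    {"name": "Topic B", "engagement": 8700, "viewpoint": "liberal"},
    {"name": "Topic C", "engagement": 6200, "viewpoint": "neutral"},
]

# The policy knob: 1.0 everywhere would mean a neutral, engagement-only ranking.
POLICY_MULTIPLIER = {"conservative": 0.5, "liberal": 1.0, "neutral": 1.0}

def trending_score(topic):
    return topic["engagement"] * POLICY_MULTIPLIER[topic["viewpoint"]]

for t in sorted(topics, key=trending_score, reverse=True):
    print(t["name"], trending_score(t))
# Topic A leads on raw engagement (9500) but ranks last after the multiplier
# is applied (4750), and the published list gives no hint as to why.
```

One constant, buried in one function, is enough to shift what millions of people see, which is exactly why undisclosed algorithmic editing is so hard to detect from the outside.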

Through our tech businesses we have designed architectures of persuasion, built simply to get people to click on ads, that can be deployed at the scale of billions, straight to our individual private screens. So as citizens, we can no longer assume that our neighbors are receiving the same information, or for that matter, have any idea what information they are receiving. And without a common basis of information, it is becoming more difficult, if not impossible, to engage in public debate.

Furthermore, what is to keep people in power from giving in to the temptation to use these “persuasion architectures” to control us, using our personalized information flows to influence our behaviors, our decisions and how we function? Granted, politicians have always pandered to their political bases by airing particular TV ads in only one part of the country or by mailing different mailers to different voters. What is new is that in the digital and social media age it is possible to reach nearly infinite combinations of audiences and send highly specialized messages to each of them.

President Trump was reported to have used Facebook ads to target his supporters, assuring them that a wall will be built along the southern border of the U.S. and then petitioning them for donations. The ads conflict with versions of his plan for the wall put out by the administration elsewhere. These types of ads, also referred to as “dark posts”, operate exactly the same as any other digital ad. For better or worse, the online data collected on us is democratized and available to anyone who wants access, either via third-party ad exchanges or indirectly through portals like Facebook and Google, which let you utilize their data by placing your ads on their networks. The major issue is that when governments and people in power send these kinds of ads and messages to only a subset of the citizenry, messages the rest of the citizenry cannot see, and in particular messages that seem to be at odds with reality, it undermines government transparency.

So, whether we point to the erosion of net neutrality, companies employing an editorial hand in the algorithms that organize our information flows, or politicians who craft dark posts for private audiences, it is becoming perfectly clear that our ethical decision-making processes and accountability mechanisms have not kept pace with our technology. Now more than ever, we need to institute measures and requirements of transparency and accountability and be bound by them. Otherwise the brilliance of the digital age will be increasingly clouded by the specter of social engineering.

Those of us in the ICT4D community must confront this hard problem. We have a particular responsibility to explore the relationships between technology, human rights and an ethical perspective on ad-based business models, with particular attention to the rights to privacy and data protection. This will help us start thinking better about not only the prodigious potential of digital technologies but also the growing problems associated with the digital economy.
