Confirmation Bias and Algorithms


Disassembling Platforms, Reassembling Sociality

Van Dijck, José. “Disassembling Platforms, Reassembling Sociality.” The Culture of Connectivity: A Critical History of Social Media. Oxford University Press, 2013. 24-43.

Reflections by: Véronique Hamel and Alexandre Pouliot

Introduction

Van Dijck opens this chapter by briefly presenting the coevolution of the music platform iTunes and the music industry on economic and legal grounds. She introduces the idea that, to better understand the reciprocal influence between microsystems and their surrounding ecosystems, it is useful to distinguish between users’ actual use of a given technology and the socio-economic structure behind that technology. The goal of this dual analysis is to recognize common norms and mechanisms through the interoperability of different platforms.

“we can distinguish five significant concepts that help unpack the technological dimension: (meta)data, algorithm, protocol, interface, and default. These terms have in common meanings that carry beyond the technological realm into the social and the cultural.” (p.31)

Layer 1: Platforms as technocultural constructs

Element 1: Technology

The author is interested in the sociocultural aspect of a platform’s different technological components, since the very architecture of software and web platforms is the result of social dynamics and norms that got encoded into the structure. “Data” encompass individual and collective properties, like a name or a phone number, as well as derivative products, like buying suggestions on Amazon. Protocols frame the conversations between different actors about privacy and subversion. In turn, default settings regulate users’ behavior on those platforms.
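To make this concrete, here is a minimal Python sketch (all names and settings are hypothetical, not van Dijck’s) of how individual data, derived metadata, and default settings can coexist in a platform’s design, and of how a default silently sets the norm:

```python
# Hypothetical sketch: data, metadata, and defaults in a platform's design.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str                           # individual data supplied by the user
    phone: str = ""                     # more individual data
    # Metadata the platform derives from activity, e.g. inferred interests
    # feeding the "buying suggestions" the chapter mentions.
    inferred_interests: list[str] = field(default_factory=list)

@dataclass
class PrivacyDefaults:
    # Defaults regulate behavior: most users never change them, so the
    # platform's chosen default effectively becomes the social norm.
    posts_public: bool = True           # sharing is opt-out, not opt-in
    allow_metadata_resale: bool = True  # "by default" agreement to data use

alice = UserProfile(name="Alice", inferred_interests=["cycling"])
settings = PrivacyDefaults()            # untouched defaults = implicit usage
print(alice.name, settings.posts_public)  # True: an encoded norm, not a choice
```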

“The confrontation between implicit usage and explicit use embodies part of a negotiation process between platform owners and users to control the conditions for information exchange; such struggle also bares disputed norms and values.” (p.35)

Element 2: Usage-users

Van Dijck draws a distinction between the implicit and the explicit user of a platform. The implicit user sticks to the interactions built into the platform’s architecture and goes no further. The explicit user, however, departs from the default interactions and changes the platform’s parameters through their very activity on it. This explicit user is visible in statistics and demographics and can be studied as an experimental or ethnographic subject.

“Users and owners share the objective of having “good content” flow sinuously through the ecosystem’s arteries, but their interests also diverge. First of all, where users favor multiple forms and formats, platforms prefer standardization of content and uniform deliverance.” (p.36)

Element 3: Content

A platform’s content is shaped in a constant negotiation between the varied interests and tastes of users and the owners’ drive for professional standardization. Both pursue “good content”, but opinions may vary on the criteria that establish “goodness”; hence the constant negotiation.

Layer 2: Platforms as socioeconomic structures

Element 4: Ownership

Van Dijck defines ownership models according to who owns the platform and places the different models on a spectrum running from entirely for-profit, investor-owned platforms (Facebook) to entirely nonprofit and nonmarket ones. Today, however, platforms enter into numerous partnerships to share data and integrate one another’s functions.

“[…] even as little as ten years ago, the coding of social actions into proprietary algorithms, let alone the branding and patenting of these processes, would have been unthinkable.” (p.37)

Element 5: Governance

The author defines governance as “how, and through what mechanisms, communication and data traffic are managed.” (p.38). The main governance issues on social media platforms are inscribed in end-user license agreements (EULA) and terms of service (ToS). However, as van Dijck notes, most users do not read them, and the law is ill-suited to this realm, so many grey areas remain that lead to user discontent or problematic practices.

“Facebook’s privacy policy, for one thing, is known to be more complicated and longer than the U.S. Constitution. Even without reading the rules, though, Facebook users encounter changes in governance through altered interfaces, usually without formal notice from the platform’s owner […]” (p.38)

Element 6: Business Model

A business model describes, in short, how a social media platform makes money from its activity. While some platforms use pop-up ads, most consider the practice risky because it may drive away their precious users. Hence, the marketing practices of the cultural industry on social media platforms are constantly changing, relying on ‘influencers’ and on friend or algorithmic suggestions to publicize other products and monetize that publicity. Selling metadata (with users’ by-default agreement) is another common business model; a toy sketch of its predictive side follows the quote below.

“Sophisticated mathematical models for analyzing aggregated data and predicting social trends are turning the incessant flow of data into a potentially lucrative connective resource.” (p.40-41)
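As a toy illustration of this point (not an example from the book), the sketch below fits a linear trend to invented daily mention counts and extrapolates one day ahead, showing in miniature how aggregated activity data becomes a predictive, and hence sellable, resource:

```python
# Toy trend prediction from aggregated platform data (all counts invented).
import numpy as np

daily_mentions = np.array([120, 150, 180, 260, 310, 400, 520])  # mentions/day
days = np.arange(len(daily_mentions))

# Least-squares line through the aggregated counts.
slope, intercept = np.polyfit(days, daily_mentions, deg=1)
forecast = slope * len(daily_mentions) + intercept  # extrapolate to tomorrow

print(f"trend: +{slope:.0f} mentions/day, forecast for tomorrow: {forecast:.0f}")
```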

Conclusion

Van Dijck closes the chapter by reaffirming that social media platforms shape sociality just as much as sociality informs those same platforms. Moreover, one must keep in mind that any given platform exists within a constellation of other platforms and that the relations between them constantly oscillate between collaboration and competition. Finally, analyzing social media and algorithms requires a multidisciplinary approach to do justice to the complexity and multifaceted nature of the phenomenon.


Explanation in Artificial Intelligence: Insights from the Social Sciences

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.

Reflections by: Véronique Hamel

Introduction

Explainable AI (XAI) should learn from the social sciences about how explanations are produced and given. It turns out that humans rely on both social norms and biases when producing and evaluating explanations, and understanding and using these norms and biases could make XAI more effective and accessible. Strangely, however, Miller found that most XAI research does not build on the large body of research on explanation production and evaluation, but rather relies on researchers’ and developers’ intuitions about what constitutes a good explanation.

“The very experts who understand decision-making models the best are not in the right position to judge the usefulness of explanations to lay users — a phenomenon that Miller et al. refer to (paraphrasing Cooper [31]) as “the inmates running the asylum”.” (p.2)

Miller also distinguishes between explainability, namely the degree to which a person can understand how and why a certain decision is made in a given context, and explanation, the act of explicitly explaining a certain decision. Both are beneficial for building users’ trust in AI and for making AI ethical. Essentially, Miller argues that what constitutes a good explanation differs between a human and a machine: although statistics and probabilities yield more precise and complete explanations, context and causes matter more to humans.

“Building intelligent agents capable of explanation is a challenging task, and approaching this challenge in a vacuum considering only the computational problems will not solve the greater problems of trust in AI.” (p.2)

Four teachings from the social sciences

Explanations are contrastive

A person rarely asks for an explanation of why a certain event occurred in and of itself. Rather, the event is questioned in relation to its alternatives, whether explicit or implicit; the same goes for a decision or an action. For example, when someone asks “Why did they hire Tom?”, the person is not asking which intrinsic characteristics got Tom hired, but rather why Tom was chosen over Kim, why this company rather than that one hired Tom, and so on. Some authors refer to the actual situation under scrutiny as the fact, and to the question’s explicit or implicit alternatives as the foils (a small sketch follows the quote below).

“[…] people ask for explanations about events or observations that they consider abnormal or unexpected from their own point of view. In such cases, people expect to observe a particular event, but then observe another, with the observed event being the fact and the expected event being the foil.” (p.9)
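A small Python sketch can make the fact/foil logic concrete. The hiring example and its features are hypothetical; the function simply reports the features on which fact and foil differ, which is what a contrastive answer to “Why Tom rather than Kim?” would draw on:

```python
# Hypothetical sketch of a contrastive explanation: report only what
# separates the fact ("Tom was hired") from the foil ("Kim was not").
def contrastive_explanation(fact: dict, foil: dict) -> dict:
    """Return the features on which the fact and the foil differ."""
    return {
        feature: (fact[feature], foil[feature])
        for feature in fact
        if feature in foil and fact[feature] != foil[feature]
    }

tom = {"degree": "MSc", "years_experience": 6, "referred": True}
kim = {"degree": "MSc", "years_experience": 4, "referred": False}

# The shared feature (the MSc degree) is omitted: it cannot answer
# "why Tom rather than Kim?"; only the differences can.
print(contrastive_explanation(tom, kim))
# -> {'years_experience': (6, 4), 'referred': (True, False)}
```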

Explanations are selected in a biased way

More often than not, a good explanation is not an exhaustive listing and analysis of all the causes that combine to produce a certain event. Rather, the person explaining selects one or a few causes and offers them to the person asking for the explanation. How these many intricate causes boil down to one, and how that single explanation is selected, is a matter of human bias. There are a good number of criteria for selecting a “good” explanation, the first being how well the explanation shows the fact to be more plausible than the foil (or counterfact). Other criteria Miller discusses include simplicity, generality, and coherence with the explainee’s prior beliefs; a sketch of such biased selection follows the quote below.

“With respect to explanation in AI, persuasion is surely of interest: if the goal of an explanation from an intelligent agent is to generate trust from a human observer, then persuasion that a decision is the correct one could in some case be considered more important than actually transferring the true cause” (p.8)
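The biased selection described above can be sketched roughly in code. The scoring function and its weights below are arbitrary assumptions, not Miller’s model; they only illustrate how a single, contrastively strong cause can beat an exhaustive causal history once simplicity is rewarded:

```python
# Rough sketch of biased explanation selection (weights are assumptions).
def select_explanation(candidates: list[dict]) -> dict:
    def score(c: dict) -> float:
        # How much more plausible does this cause make the fact than the foil?
        contrast = c["p_fact_given_cause"] - c["p_foil_given_cause"]
        simplicity = 1.0 / c["num_assumptions"]  # fewer assumptions = simpler
        return contrast + 0.5 * simplicity       # arbitrary weighting

    return max(candidates, key=score)

candidates = [
    {"cause": "stronger references", "p_fact_given_cause": 0.8,
     "p_foil_given_cause": 0.3, "num_assumptions": 1},
    {"cause": "full causal history of the hiring process",
     "p_fact_given_cause": 0.95, "p_foil_given_cause": 0.05,
     "num_assumptions": 12},
]

# The exhaustive history discriminates best (0.9 vs 0.5 contrast) but is
# penalized for complexity, so the single simple cause wins: a miniature
# version of the human selection bias discussed above.
print(select_explanation(candidates)["cause"])  # -> stronger references
```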

Causes are more important than probabilities

Explanations are social and depend on people’s beliefs

Applications for XAI

Miller states that the explanations developers and engineers could easily and intuitively provide for their algorithms or the decisions those algorithms make would be neither useful nor interesting to the users asking for XAI. It is thus important that XAI take into account the way people understand and process explanations if developers really want people to understand how AI or certain algorithms work.

“Research indicates that people request only contrastive explanations, and that the cognitive burden of complete explanations is too great.” (p.11)

Questions:

  1. Miller is mostly concerned about explainable AI (XAI), but does not talk much about ethical AI (EAI). Do XAI and EAI necessarily go hand in hand? In other words, does the fact that the decision made by an AI agent is understandable and acceptable by a majority of humans mean that this decision is ethical?

  2. Although it is true that mimicking how humans explain their decisions to other humans might ease human-AI interactions, should we be wary of transposing human biases into AI-led decisions and explanations? For example, scapegoating is a very popular explanatory shortcut for humans, but it is by no means ethical.

  3. “With respect to explanation in AI, persuasion is surely of interest: if the goal of an explanation from an intelligent agent is to generate trust from a human observer, then persuasion that a decision is the correct one could in some case be considered more important than actually transferring the true cause.” (Miller, 2019, p.8) Is this idea problematic or simply pragmatic? What are the ethical implications of it?
