Reassessing Collective Intelligence

“@MichelleFields you are totally delusional”: Collective intelligence in 2016

Back in 2005, Tim O’Reilly, publisher and technology pundit, posted an essay describing “Web 2.0.” In it, O’Reilly attempted to describe what had changed about the internet since the early-2000s tech bubble and what had come to be called “Web 1.0,” the first generation of the business-oriented, public web.

One of the changes of Web 2.0 that O’Reilly identified was “harnessing collective intelligence,” using the group features of the web to develop new smart products. One effect of this collective intelligence, “turning the web into a global brain” as he puts it, was that blogging, as a forerunner of social media, became “a reflection of conscious thought and attention.” As a consequence of this attention and filtering, a new form of media would be enabled, one not decided upon by traditional gatekeepers but rather a product of this collective intelligence. Although this vision of a new media is presented descriptively, that description is utopian, predicting a democratic future where “ ‘we the media,’… not a few people in a back room, decides what’s important.” The implication is that this collective intelligence would lead our media consumption — and personal knowledge — in the direction of successes like Google’s collective intelligence search engine or Wikipedia’s crowdsourced knowledge.

If we flash forward to 2016, it is hard not to wonder how this democratic vision of collective intelligence has gone awry. Rather than unleashing a new era of democratic knowledge production, many of the collective forces of our current moment have reinforced erroneous and myopic views of the world.

To give but one example, the 2016 U.S. presidential election is rife with falsehoods and fabrications, largely personified in the Republican frontrunner Donald Trump. The most recent example — and one of the most odious — has been the smearing of reporter Michelle Fields after she was grabbed by Trump campaign manager Corey Lewandowski while trying to ask the presidential candidate a question. Both Trump and Lewandowski repeatedly denied the incident occurred and smeared Fields as “delusional” and not a reporter but someone who engages in “attention seeking.” When video evidence supporting Fields’s account of events surfaced and Lewandowski was arrested on a misdemeanor battery charge, commentators wondered if the lying and attacks would harm the candidate’s position in the Republican primaries. As the New York Times (euphemistically) put it regarding Trump’s previous falsehoods:

For much of the past year, fact-checkers have struggled to keep up with the frequent truth-stretching and wholesale inaccuracies of Mr. Trump and his campaign, with little discernible effect on his support among a large portion of the Republican electorate.

Although this is but one example, and it is not yet clear how the fallout from this event will affect Trump’s campaign, it is indicative of a pattern that has manifested in online communication since the publication of O’Reilly’s essay: the rejection of ideas, and even facts, that are inconvenient or damaging to one’s own beliefs. This prompts us to ask: What is the effect of collective intelligence if, as in Fields’s case, such seemingly damning evidence can simply be waved away with little or no consequence?

Answers to this question lie beyond technology punditry, in the realms of network research and cognition. We know that our personal networks can become “echo chambers” that never challenge our opinions (pdf), and that even when we are presented with opinions that differ from our own, we can remain unswayed, because particular network structures can fool us into thinking that an opinion — such as our own — is popular or widely held when it is not. Surprisingly, more information is not always a net benefit in such situations. As Hutchins has shown, high levels of information sharing in closed networks can lead to nearly unshakable beliefs and confirmation bias, regardless of the presentation of new evidence (pp. 252-254).

Interestingly, other trends noted by O’Reilly, such as the establishment of “platforms” and curation of closed data as a resource, have undermined his utopian vision. As social sites like Facebook increasingly manage users’ media diets, the effects of echo chambers and confirmation bias become more pronounced, a result that is furthered by the closed nature of filtering policies on most social sites.

So what is the answer? DML practitioners need to be careful not to simply accept utopian or dystopian predictions about technology, but rather seek to understand the actual effects of our digital tools as they present themselves in the real world. Consequently, we have to focus on teaching techniques that do not rely on simplistic predictions about the effects of technologies but account for the real benefits and drawbacks of our digital moment.

In my academic work, I have argued that diversifying one’s informational inputs (paywall) and actively attempting to create new networks can help to mitigate and counter these effects. By maintaining diverse informational networks and creating new ones, individuals might be able to avoid the echo chambers closed networks can produce, counter the confirmation bias Hutchins describes, and lessen the power of media filters like Facebook.

To this end, DML practitioners should be modeling and instructing students in how they might diversify their networks — how to engage networked culture in networked ways. For example, instructors could encourage students to begin cultivating information sources outside of their social networks, such as by deliberately mixing a variety of news sources into their personal media diets. One way of doing this outside of the influence of filtered media sources like Facebook would be to subscribe to RSS feeds from information sources across the ideological spectrum.
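To make the RSS suggestion concrete, below is a minimal sketch (in Python, using only the standard library) of how a student might merge items from several feeds into a single, deliberately mixed reading list. The outlet names and feed contents here are invented stand-ins; in practice the XML would be fetched from real feed URLs (e.g., with urllib.request).

```python
# Minimal sketch: merging items from several RSS 2.0 feeds into one
# mixed reading list, so no single source dominates the media diet.
# The two feeds below are hand-written stand-ins for fetched feeds.
from itertools import chain, zip_longest
import xml.etree.ElementTree as ET

def feed_items(rss_xml):
    """Return (source, headline) pairs from an RSS 2.0 document."""
    channel = ET.fromstring(rss_xml).find("channel")
    source = channel.findtext("title", default="unknown")
    return [(source, item.findtext("title", default=""))
            for item in channel.findall("item")]

def mixed_reading_list(feeds):
    """Interleave items round-robin across all feeds."""
    columns = [feed_items(xml) for xml in feeds]
    interleaved = chain.from_iterable(zip_longest(*columns))
    return [pair for pair in interleaved if pair is not None]

# Stand-in feeds from two (hypothetical) ideologically distinct outlets.
LEFT = """<rss version="2.0"><channel><title>Left Outlet</title>
<item><title>Story A</title></item>
<item><title>Story B</title></item>
</channel></rss>"""

RIGHT = """<rss version="2.0"><channel><title>Right Outlet</title>
<item><title>Story C</title></item>
</channel></rss>"""

for source, headline in mixed_reading_list([LEFT, RIGHT]):
    print(f"{source}: {headline}")
```

The round-robin interleaving is the point of the design: rather than reading one outlet to exhaustion before starting the next, the reader alternates between sources, which keeps opposing framings of the same events side by side.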

Whatever our approach, it is important that DML practitioners teach students how the networked forces of digital culture impact our knowledge and thinking, and not mislead them into thinking either that digital technologies are inherently democratic or inherently demagogic. Only by encouraging clear-headed examinations of digital tools can we avoid these two extremes.

Banner image: Republican Presidential candidate Donald Trump’s campaign manager Corey Lewandowski (center) is seen allegedly grabbing the arm of reporter Michelle Fields in this still frame from video taken March 8, 2016 and released by the Jupiter (Florida) Police Department March 29, 2016. Handout photo via Reuters