The concept of “big data” has been the subject of considerable hype and speculation in recent years. So much so that the dominant technologies and technical practices that generate big data — data analytics, algorithms and machine learning — are now commonly described as “artificial intelligence” instead. As a result, Ian Bogost argues, there has been “an explosion of supposed-AI in media, industry and technology.”
Despite emerging punctures in the big data and AI hype bubbles, it remains hard to dispute that digitally produced, collected and analysed forms of data have been vested with certain powers and influence. Tobias Rose-Stockwell has gone so far as to argue that “Every time you open your phone or your computer, your brain is walking onto a battleground. The aggressors are the architects of your digital world, and their weapons are the apps, news feeds, and notifications in your field of view every time you look at a screen.”
Digital data, then, are a source of power for those organizations with the technical resources, computer languages and skilled practitioners to collect, mine and communicate them. Governments and corporations have become expert in “collecting, storing, retrieving, analysing and presenting the data that records what people do and say,” argue Evelyn Ruppert, Engin Isin and Didier Bigo in an important new article on “data politics.”
“This ranges from who communicates with whom, who goes where, and who says what — and much more besides,” Ruppert and coauthors add. “There has never been a state, monarchy, kingdom, empire, government, or corporation in history that has had command over such granular, immediate, varied, and detailed data about subjects and objects that concern them.”
Data have therefore become a major object of economic, political and social investment for governments, corporations and other agencies and authorities, with the vast warehouses of big data they collect becoming a hugely valuable source of knowledge and power.
Education Data Power
In the field of education, a growing number of organizations have begun seeking to consolidate their own power and authority by developing expertise in big data, algorithms, machine learning and artificial intelligence. For example, Natasha Singer in the New York Times has detailed how Google has become a major player in the educational data field through its low-cost Chromebooks and G Suite for Education apps.
Another prominent corporate player is Pearson, the world’s largest education company, which is also seeking to restructure and streamline its business around digital education and data. Over the last decade, in collaboration with its vast web of partners, it has been assembling the technologies of data storage, machine learning and data analytics systems, the programming languages, code and algorithms, and the skilled expertise of data scientists, software developers, engineers and analysts required to become a “digital-first” education company.
Pearson’s ambitions involve generating massive databases of educational data and utilizing real-time data analytics, machine learning algorithms, and artificial intelligence to monitor, measure, make calculations and produce facts and knowledge about education. As I have documented in my own research, Pearson has previously established the Learning Curve data bank to help policymakers interpret internationally comparative school data through accessible data visualizations; launched a Center for Digital Data, Analytics and Adaptive Learning; and invested in and partnered with the learning analytics infrastructure provider Knewton to analyse data from Pearson e-learning products, among many other activities.
Pearson has also produced a vast library of resources, including glossy reports, blogs, working papers, frameworks, and presentations, plus public appearances by some of its high-profile staff, to disseminate its key messages about digital education and the power of educational big data.
Although Pearson has recently restructured its business and scaled back or phased out some of these activities, it has also announced a partnership with IBM Watson and invested heavily in product development to produce its own in-house adaptive learning capabilities. Along with IBM, it has been spearheading the development and application of artificial intelligence in education. Given Pearson’s recent financial difficulties and reputational decline, its primary strategy for restructuring appears to be centred on the digitization and datafication of education.
According to Albert Hitchcock, Pearson’s chief information officer, speaking to Information Age in 2016, its objective is to move the business from being a publishing company to a “21st century digital platform body.” To achieve this objective, Pearson is creating a “global digital learning platform” utilizing cloud services, analytics and machine learning across all its products and services.
“So when you look at the use of social media and Facebook, when you look at the use of Amazon for the retail environment, when you look at the use of Netflix for the media environment, we want to create a platform that has a number of those attributes associated with it,” Hitchcock said.
“Technology is underpinning a lot of our strategy here, in terms of how we transform the learning experience into a sort of digitally led experience that is highly personalised,” Hitchcock added. “We believe through this use of analytics and personalisation, and the ability to adapt content on the fly, we can actually shorten the learning process and improve learning outcomes and the ability to go through the assessment process.”
Pearson’s aim is to provide a personalised media experience, delivered using analytics and informed by an understanding of learner behaviour derived from the data generated during learning activities, all through a single platform based on cloud hosting and APIs. Given that IBM Watson is being marketed as a cloud-based, API-driven service for “powered by Watson” applications, some commentators suspect that Watson will provide much of the adaptive and machine learning capacity for Pearson’s global digital learning platform.
Whether its global digital learning platform and IBM partnership will revive Pearson’s business fortunes remains unknown. However, beyond being a strategic business objective, its restructuring as a “digital-first” company focused on data analytics and machine learning suggests that Pearson is seeking to consolidate its power and authority over big data collection and use in education at a massive global scale.
In so doing, it is seeking competitive advantage as an analytics vendor in the ed-tech market, while reshaping educational practices to fit its digital-first vision of a data-driven, cloud-based future of education. In other words, Pearson is engaging in data politics.
The digitization and datafication of education exemplified by Pearson needs to be understood as a significant manifestation within the education sector of the growing power and politics of data and metrics in advanced states.
According to Evelyn Ruppert and colleagues, “data and politics are inseparable” and data has become “an object whose production interests those who exercise power.” Historically, they argue, “states, agencies, organizations, and institutions — often assembled in different combinations as governments — constituted their authority, legitimacy, and legality by producing knowledge about objects and subjects through establishing method and data regimes such as censuses, indexes, registers, rolls, catalogues, logs and archives.”
These techniques allowed the state and governments to maintain an “effective monopoly on data.” However, the “monopoly of the state over data production, collection, and even interception is increasingly challenged,” they argue, “by corporations, agencies, authorities, and organizations that are producing myriad data” about all sorts of interactions, transactions, behaviours and movements.
If the locus of power with regard to big data is not with political centres of authority, then where is it located? In a recent article reconceptualizing “elite power under advanced neoliberalism,” William Davies has argued that traditional forms of expertise, authority and judgment have experienced diminishing public trust in recent years, with experts and elites questioned over their lack of “objective judgment over the ‘facts’ of what is taking place.”
Instead, computational systems that can process huge quantities of data, and those who manage them and can report what the numbers are “saying,” are gaining increasing public status and authority over political and expert judgment.
In this context, according to Davies, two distinctively new forms of elite power have emerged. On the one hand are “cyborg intermediaries”: actors defined by their “capacity to sense what is going on in real-time” through “nonhuman, real-time feedback technologies.” These actors are “cyborg” because they are not always or not wholly human, but exist in the form of software, code and algorithms that are able to perform the necessary calculations.
On the other hand, says Davies, are “diplomatic intermediaries” who are able to translate and narrate the flow of data to make it intelligible for public audiences, political agents and states. In other words, Davies concludes, “the rise of big data privileges those capable of mediating between mathematical analytics and empirical narratives about what is being represented.”
Davies is writing about the financial services sector and financial technology, or “fintech,” but his insights into the intermediaries that now possess elite power apply well to the ed-tech sector too. Through organizations such as Pearson and its partners, educational processes are increasingly being delegated to computational systems that can crunch the numbers and to the human experts who can narrate them.
By partnering with Knewton and IBM and cultivating its own in-house capabilities, Pearson has developed its own cohort of “cyborg intermediaries” whose task is to conduct real-time analyses of massive quantities of data through data analytics systems. Knewton’s adaptive learning software or IBM’s Watson applications are the nonhuman cyborg intermediaries within Pearson, along with the human software engineers, data analysts, computer scientists and statistical researchers that are able to work with these systems.
Pearson has also positioned itself as a diplomatic intermediary, able to narrate the meanings, potential and impacts of educational data and digital technologies for a range of audiences: the public, the media, practitioners, policymakers and state agencies alike. Its glossy reports, accessible data banks, web resources, and its high-profile public figures are all evidence of how Pearson is seeking to extend its power in education through the translation and narration of data, not just its technical analysis.
Organizations such as Pearson are seeking to consolidate their power within education through both the production and the narration of digital data. In so doing, Pearson has elevated the role of nonhuman systems and processes to participate in monitoring, analysing, calculating and shaping the complex activities of teaching, learning and assessment.
As contemporary policy agendas permit more and more digital-first organizations to sell their data analytic products to schools and colleges, it is likely that some will attain privileged status and elite power in state education. While in the past state governments were able to maintain a monopoly over data production and its use as a source of knowledge and power, Pearson’s ambitions to create a single global digital learning platform suggest it is seeking a form of data monopoly and elite power in education that lies beyond state authority.
Pearson is one of myriad companies seeking authority through expertise in educational data production and the capacity to mediate between mathematical analysis and public presentation. With Google, Amazon and a host of other corporations and startups moving into educational data generation and use, a new form of data politics is opening up within education. This data politics is not merely economic, as different organizations compete for market share. It is primarily a competition for the sources of knowledge about processes of teaching and learning, and a competition to narrate the data in ways that might make educational practices amenable to being reshaped and reformed.
Banner image credit: beachmobjellies