The German Research Foundation (DFG) is funding the three-year project, which combines the perspectives of law (Prof. Dr. Hannah Ruschemeier) and philosophy (Prof. Dr. Rainer Mühlhoff), with around €698,000.
Background: The spread of networked digital media technology is fueling a societal revolution in the production and processing of knowledge. As a result, certain applications of artificial intelligence (AI) are becoming politically, socially and economically influential, as predictive models can be created from the individual data generated in everyday interactions with digital media. What is special about this widespread form of data-based AI is that the associated risk affects not only the individuals on whose data the models are trained, but also third parties. A new form of informational power over individuals and social groups is emerging.

"This development goes hand in hand with an erosion of autonomy, because individuals lose control over the socially relevant classifications attributed to them, which changes social, economic and political relationships," explains legal scholar Prof. Ruschemeier. "At the same time, it harbours dangers for democracy and the political public sphere and promotes ubiquitous commercialization, which in turn leads to unequal treatment, new forms of discrimination and unforeseeable effects in numerous areas of society." Philosopher Prof. Mühlhoff adds: "The use of such models is often so broad that the technological apparatuses themselves become structural factors in society, thereby reinforcing social inequalities."
The researchers aim to systematically examine the risks of predictive AI technologies and to draw up effective regulatory proposals that promote the technology's opportunities while minimizing its dangers. "We will analyze the phenomenon of predictive knowledge production theoretically, in a close interlinking of philosophy and law, in order to lay the groundwork for the ethical and legal assessment of its effects," says Mühlhoff. "Fundamentally, this is about the erosion of privacy in the data society, which is being challenged in new ways by the widespread use of predicted information: the most virulent form of privacy violation today lies in the prediction of personal information," concludes Ruschemeier.
Further information for editorial offices:
Prof. Dr. Hannah Ruschemeier, Osnabrück University
School of Law
hannah.ruschemeier@uos.de
Prof. Dr. Rainer Mühlhoff, Osnabrück University
Institute of Cognitive Science
rainer.muehlhoff@uos.de