A "Wicked Problem:" Opening the Black Box of an Algorithm-Driven Information Environment

Isabelle Freiling, Department of Communication
Isabelle Freiling researches “wicked problems.” Introduced by Horst Rittel and Melvin Webber in 1973, a wicked problem is a social problem characterized by complex and entangled stakeholders, little to no consensus on definitions or underlying causes, and implications that are wide-reaching and confusing. The way that artificial intelligence impacts society is an example par excellence of a wicked problem, and Freiling is working to understand the black box of the algorithm-driven information environment that exists in the modern world and its ripple effect of implications for researchers and the broader public.
Freiling, an assistant professor in the Department of Communication, is a member of the University’s inaugural cohort of One-U Responsible AI Faculty Fellows. During the three-year fellowship, she will conduct research with an interdisciplinary team focused on the thematic areas of health and wellness and the environment. Freiling highlights how crucial interdisciplinary work is: “Many of the problems we face right now in science or in society do not fit perfectly into how a university is structured, with its knowledge silos. While those silos can be very useful to generate discipline-specific knowledge, real-world problems don’t necessarily fall into only one of those silos. Instead, to solve wicked problems like AI, you might need scientists from several departments, with different areas of expertise, to work together.”
Avery Holton, Chair of the Department of Communication, emphasizes how important this research is. “Innovations in AI are quickly encouraging many fields to evolve, adapt, and advance, and Communication is at the core of those changes,” he says. “Dr. Freiling’s work in AI takes on efforts important to many industries, especially those with the potential to lead the kinds of changes in research and in practice that we need for sustainable and successful scholarship and workforces.”
Since the vast majority of information in the modern world is mediated by algorithmic delivery platforms – social media, search engines, even websites that tailor content to a user’s preference data – it’s crucial for researchers to understand how those mechanisms work in order to correctly measure things like media effects on human behavior. When researchers do not know who saw a message, how are they supposed to measure the effects of that message on people?
However, for a variety of reasons, most technology companies are not forthcoming with their algorithms. Some companies may be concerned about influencers and other content creators trying to “game” the system, and in certain contexts, an open algorithm may pose security risks. Freiling gives the example of an AI model applied to CRISPR gene editing: in malicious hands, such a system could do a great deal of damage by giving unethical actors access to extremely sensitive technical information about how to modify the human genome, potentially unlocking a Pandora’s box of unintended consequences for hereditary lines.
This is where Freiling’s work comes in. Part of her work is determining methods for researchers to study media effects when so much of the process takes place within a black box. “We need to develop better ways for the scientific community to get access to the data they need to study online information environments like social media that are shaped so much by AI,” she says. “It’s not an option to not study it.”
She notes, “Some tech companies do work with researchers to study the effects, but the power in those collaborations still rests with the companies.” They can typically define which messages they want to have fall under “misinformation,” for example. “We have also seen them manipulate their algorithms during such a collaboration, which can influence results in ways that make them more favorable to the company’s goals.” Outside of those collaborations, Freiling says, “Right now, we do proxy research with the best data we have, but those study designs are not as likely to give us results that apply to the real world.”
"We need to develop better ways for the scientific community to get access to the data they need to study online information environments like social media that are shaped so much by AI. It’s not an option to not study it.”
When asked what message she would most want tech companies and AI developers to hear from the humanities, Freiling emphasized the need for collaborative research access. “I would like them to work with scientists and regulators to find a way that we can make algorithms or models accessible to academic researchers while also acknowledging that they couldn’t be shared widely.”
Beyond her research on data access, Freiling also sees other issues surrounding responsible AI that need to be studied. First, there is a public science communication issue to address. “We need to have discussions early on with the affected audiences – who is differently affected by AI?” she asks. “We need this public engagement with science to see what concerns or risks specific groups see. Of course, most of the time, members of the public who engage in those efforts are already interested in science. We need to overcome this selection bias to bring in a wider range of affected people to participate, not just those who are already interested in AI or science.”
Second, Freiling notes that public engagement has to be done in good faith. She emphasizes that this engagement must go beyond token inclusion to meaningfully incorporate feedback, particularly from differently affected groups. Depending on the application, this might mean involving religious or conservative audiences who may feel scientists don’t represent their views. “We should avoid leaving people out of the discussion – AI is affecting so many of us,” she states.
Still, Freiling is optimistic. “Science communication can help identify ethical and moral issues, risks that certain groups see, and what would be the consequences if we did it like this or that,” she says. “To develop AI responsibly, the science communication perspective needs to be brought to the technical side of AI.”