Josh Pasek is a professor of communication and media and political science and a faculty associate at U-M’s Center for Political Studies. He is also the instructor for Michigan Online’s new course series “Navigating Disruption: Generative AI in the Workplace.”
What are the potential impacts for organizations adopting generative AI?
Generative artificial intelligence (AI) has an enormous capacity to aid in the completion of all sorts of tasks. It is useful for summarizing large bodies of text and other data, helping generate novel content, routinizing regular procedures, and pointing to and aggregating information that might otherwise be hard to gather. At the same time, generative AI has limits that can lead us astray or even cause harm. Its ability to produce well-crafted text is a tremendous resource, yet that same polish can mask the mistakes and hallucinations the algorithms make. Because AI is trained on human-produced and other data, it can often make assessments and determinations as well as people do, and far more quickly, but it can also replicate and even compound existing societal biases, sometimes in new ways.
As a technology, generative AI offers enormous potential value, but organizations are going to need to identify where value is really added and what kinds of secondary impacts AI use might have.
What can we learn from previous communication technologies that can inform the way we think about advances in generative AI?
Generative AI has the capacity to alter the relative value of different kinds of tasks. Disruptive technologies of this sort have three kinds of impacts on the tasks they influence: creating new jobs while rendering others unnecessary, augmenting the ways that humans complete certain tasks, and changing how individuals spend their effort within a task.
Working through questions about these kinds of impacts lets us understand whether AI is going to help us do a more complete job, increase the amount of work that can get completed, or render particular tasks obsolete. It also helps us work through how changes in some tasks will alter the kinds of skills that are in demand across the workforce. The best performers in many industries may see huge efficiency gains, and this could yield a winner-take-all dynamic. In other areas, whole operations may need to be reworked for AI to add benefit.
Experience with prior technologies tells us that this is likely to have huge implications for work and labor but is also liable to take some time to figure out.
How should leaders balance concerns about AI integration with the potential to enhance productivity or other benefits?
Leaders need to think broadly about how AI can assist their operations and how to prepare for any potentially problematic outcomes from AI use. They also need to think about how three factors are going to evolve: the capabilities of AI technology, the social norms around its use, and the regulatory and legal frameworks governing it. All of these are in flux. The best strategy for preparing is for organizations to be as transparent as possible about their AI policies and use.
What challenges do you foresee with implementing safeguards against biases in generative AI tools?
At their heart, AI tools are statistical prediction machines. They learn from diverse forms of data and use that learning to render their best guesses about what to provide in response. But the data they learn from is generated by people and often varies in its availability across subgroups of the population.
Because AI has more data from Western cultures, and because the available data comes from societies with systematic inequalities across gender, racial, and other lines, AI without supervision is likely to learn from and therefore replicate those biases. Although AI companies are doing some work to mitigate this, they are limited by what data is available, and the biases that emerge may depend on the context in which AI is being used.
Addressing bias requires that users understand these facts, consider the potential for the training data to yield inequities, and think proactively about how to counteract them. It also requires systematic auditing of the decisions and choices made by AI to assess whether biases are emerging in the outputs of the algorithms.
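One concrete form such an audit can take is comparing outcome rates across subgroups of the people affected by an AI system's decisions. The sketch below is a minimal, hypothetical illustration: the data, group labels, and the 0.8 threshold (a rule of thumb sometimes called the four-fifths rule) are invented for the example, not a complete fairness methodology.

```python
# Minimal sketch of an AI-output audit: compare positive-outcome rates
# across subgroups and flag large disparities. All data here is
# hypothetical, invented purely for illustration.

def audit_rates(decisions, threshold=0.8):
    """decisions: list of (group, outcome) pairs, where outcome is 0 or 1.

    Returns per-group positive-outcome rates and a flag that is True
    when the lowest group's rate falls below `threshold` times the
    highest group's rate (a common rule-of-thumb disparity check).
    """
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    hi, lo = max(rates.values()), min(rates.values())
    flagged = lo < threshold * hi
    return rates, flagged

# Hypothetical audit of eight AI-assisted decisions across two groups
sample = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates, flagged = audit_rates(sample)
# Group A's rate is 0.75, group B's is 0.25, so the disparity is flagged.
```

In practice an audit like this would run periodically over logged AI decisions, and a flag would trigger human review rather than an automatic conclusion of bias, since disparities can also reflect legitimate differences in the underlying cases.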
What skills or other professional development should leaders focus on as we prepare to adapt to AI-augmented workplaces and industries?
Individuals need to become AI literate. Almost everyone will need a basic understanding of how AI works, a recognition of AI's potential both to generate value and to produce problematic content and decisions, and an understanding of how AI-generated and AI-augmented content may spread and what impacts it may have. Some of this will come in the form of formal instruction, but much of it will only be understood as people experiment with AI and attempt to make sense of what it generates and how it works. They also need to be aware that AI is rapidly changing and that its capabilities tomorrow will not be the same as they are today.