Could AI Speak on Behalf of Future Humans?

AI systems can increase inclusivity in group decision-making and give previously unheard stakeholders a voice, but only if they are carefully developed and deployed.

A persistent social issue around the globe is the perspective deficit in group decision-making. Certain viewpoints are not sufficiently heard and may not receive fair and inclusive representation in group debates and procedures, whether at the corporate, local, or global level. Most importantly, decisions made collectively today can have a significant impact on the natural world and on future generations of humans. Yet because these stakeholders are unable to speak up for themselves, they are frequently left “voiceless.”

As artificial intelligence (AI) systems become ever more ingrained in our daily lives, we are seeing that certain of them can elevate and bring attention to the viewpoints of these previously underrepresented stakeholders. In particular, generative AI systems (such as ChatGPT, Llama, and Gemini) can produce multi-modal outputs (text, video, and audio) that can serve as stand-ins for previously unheard voices.

We refer to these outputs collectively as “AI Voice,” indicating that the human-friendly outputs of these AI systems give the previously unheard in decision-making scenarios a way to express their views, or voice. Before AI Voice can fulfill its potential, however, we must first question how voice is granted and denied in our group decision-making processes and consider how this new technology could upset the status quo. The “right to decide” and the “right to voice” are two very different things, and it’s crucial to understand this difference when considering the various roles AI Voice could play, from passive facilitator to active collaborator. This is a highly viable and hopeful way to use AI to build a more just society, but doing so responsibly will require careful planning and much more discussion.

AI Voice’s Promise

Scholars and practitioners of social innovation have emphasized the potential of artificial intelligence (AI) to boost creativity and efficiency in decision-making, while many have raised legitimate concerns about bias, fairness, and the loss of human control. We argue that, beyond these benefits and drawbacks, AI offers a way to advance social good by giving a voice to underrepresented parties in group decision-making.

Group decision-making has long been prized for its capacity to integrate a variety of viewpoints, producing conclusions that are frequently more nuanced and comprehensive than those reached by a single person. Organizations such as the United Nations, which encourages multi-stakeholder collaborative decision-making to create more flexible, feasible, and comprehensive solutions within the context of the Sustainable Development Goals, have endorsed this approach.

Traditional group decision-making, however, has drawbacks and frequently excludes stakeholders due to convenience, accessibility barriers, or discrimination. Even when decision-making procedures strive for maximum inclusivity, important viewpoints from groups that lack a voice—such as the environment, animals, and future generations of humans—may still be excluded. These latter stakeholders are vital to our society (and its future), and excluding them from group decision-making can lead to solutions that are shallow and exclusionary. In the climate debate, for instance, choices made at the national and organizational levels are frequently driven by short-term concerns rather than long-term effects on the environment and future generations of humans.

In this context, “AI Voice” stands out as a creative solution: a method of using AI to benefit society by gathering, analyzing, and expressing the viewpoints of stakeholders who cannot speak for themselves. “AI Voice” was first used (in a paper co-authored by Andrew Sarta and myself) to refer to AI-driven recommendations that addressed the concerns of frequently disregarded business stakeholders, such as customers. The term can now be understood more broadly as AI-generated, human-friendly outputs that allow AI systems to function as stand-ins for unheard stakeholders. It thus refers not to a particular product or service but to the output of certain AI systems, which can be used to give a voice to stakeholders who have not yet been heard.

Several for-profit companies, having seen the potential of AI systems to improve decision-making, have already adopted this idea of AI Voice in their own processes. Einstein AI started attending Salesforce’s weekly meetings in 2018, providing executives with visual sales and client insights derived from CRM data. Dictador, a Polish rum manufacturer, and Tieto, a Finnish IT company, have both made audacious moves by appointing AI entities to important leadership positions. In a similar vein, some institutions are creating and implementing AI technologies to support collective sense-making, discussion, and ultimately decision-making.

In contrast, consortia and nonprofit organizations have been rather slow to integrate AI technologies into their leadership and general decision-making processes. The WWF’s recent “Future of Nature” project is one noteworthy exception. Using AI, this London installation projected multiple scenarios of human impact on the natural environment of the United Kingdom. Here, AI Voice was used to express the viewpoints and difficulties of nature. It served as a narrator, graphically portraying the dire state the environment could reach if the current rate of degradation continues, as well as the positive changes possible if quick corrective action is taken. This AI-generated output was also used as a discussion tool, enabling activists and policymakers to examine and debate the possible outcomes. In this sense, AI evolved from a simple forecasting tool into an active agent that supports the creation of environmentally sound plans. There is room for even more AI involvement: a speech system, for example, could contribute directly to these stakeholder talks, giving the natural environment a voice in the conversation.

This creative application highlights the wider potential of generative AI, which is leading the way in bridging the perspective gap in group decision-making. In contrast to conventional AI, which focuses on classification and evaluation, generative AI can produce seemingly original and imaginative results.

According to Demis Hassabis, CEO and co-founder of DeepMind, the outputs of generative AI fall into three categories:

Interpolated Output: New content generated by an AI that falls within the distribution of data points already observed in its training dataset is referred to as interpolated output. This output type is defined by its resemblance to the examples the AI was trained on. An example would be an AI trained on a corpus of cat photos that categorizes cats and generates new cat images.

Extrapolated Output: In contrast, extrapolated output occurs when an AI generates content that goes beyond the specific bounds of its training data, making inferences or predictions about data points outside the training set. Extrapolated outputs, while still based on the underlying patterns learned during training, are more speculative, stretching the bounds of the AI’s learned context to produce potentially more creative and unpredictable content. One example is AlphaGo, a computer program that learned from the corpus of human knowledge about the board game Go and then played millions of games against itself. Move 37 in Game 2 of AlphaGo vs. Lee Sedol (Seoul, 2016) shows how AlphaGo used the understanding gained through that process to extrapolate strategies never seen in its training corpus.

Invented Output: When a generative AI produces completely original content with no direct precedent in its training dataset, this demonstrates the AI’s capacity for creativity beyond learned patterns. An example would be a chess-trained AI system inventing the game of Go.

The outputs of most commercial generative AI programs fall into the first two categories: interpolated and extrapolated. In part, this is because we lack efficient ways to communicate to AI systems the requirement for highly creative, “invented” outputs. At the moment, AI struggles to comprehend and respond to very abstract commands (such as “create a game that is simple to pick up, requires sophisticated skill, and can be finished within a reasonable amount of time”). As a result, these systems frequently produce work that lacks a high degree of inventiveness. This limitation underscores the value of subject-matter specialists who can engage with and comprehend abstract notions, such as biologists or climate advocates. They play a critical role in bridging the gap between AI capabilities and complicated task requirements by providing context and interpreting AI Voice outputs.

The Various Functions of AI Voice

Not every AI output, including AI Voice, serves the same function in a group decision-making process. Depending on the system’s degree of integration into collaborative decision-making, an AI system may produce outputs that help organize multi-stakeholder discussions or that provide previously unheard insights to support complex deliberations.

A helpful way to categorize these possible roles for AI Voice is a two-by-two matrix of the two types of rights humans may grant an AI system in a given process: voice rights and decision rights. Voice rights allow AI to provide insights and analysis on strategic and operational issues. Decision rights, on the other hand, enable AI to participate in the final stages of decision-making, such as voting on proposals. This distinction yields four distinct functions that AI Voice can play in group decision-making: 1. Facilitator; 2. Consultant; 3. Optimizer; and 4. Collaborator.
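The two-by-two matrix can be made concrete as a small lookup table. The following is only an illustrative sketch; the boolean encoding of rights is our own choice, not part of the original framework:

```python
# Map the two rights (voice, decision) to the four AI Voice roles.
# The boolean encoding is an illustrative assumption.
ROLES = {
    (False, False): "Facilitator",   # no voice rights, no decision rights
    (True, False): "Consultant",     # voice rights only
    (False, True): "Optimizer",      # decision rights only
    (True, True): "Collaborator",    # both rights
}

def ai_voice_role(voice_rights: bool, decision_rights: bool) -> str:
    """Return the AI Voice role implied by the rights humans grant."""
    return ROLES[(voice_rights, decision_rights)]

print(ai_voice_role(True, False))  # Consultant
```

Reading the table row by row recovers the four roles discussed below, from the rights-less Facilitator to the fully empowered Collaborator.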

Facilitator: AI systems functioning in the Facilitator role have neither voice nor decision rights. Their purpose is to serve as organizing tools rather than to contribute debate topics. In this capacity, an AI system might, for instance, gather input from numerous stakeholders before a meeting. The AI Voice output might then contain an agenda designating dedicated time to address the concerns of silent stakeholders, acting as a stand-in for those who are usually ignored. AI Voice might also be trained to run the meeting, monitor time constraints, and direct conversation in a targeted manner. This approach helps ensure that the decision-making process remains impartial and structured.
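As a sketch of the Facilitator’s agenda-building step, consider the following. The function name, data format, and reserved-slot heuristic are all hypothetical, invented for illustration:

```python
from collections import Counter

def build_agenda(submissions, meeting_minutes=60,
                 reserved_topic="Concerns of voiceless stakeholders"):
    """Allocate meeting time proportionally to how often each topic was
    raised in pre-meeting submissions, reserving a fixed slot for
    stakeholders who cannot submit input themselves (a toy heuristic)."""
    reserved = meeting_minutes // 6  # e.g. 10 of 60 minutes
    counts = Counter(topic for sub in submissions for topic in sub)
    remaining = meeting_minutes - reserved
    total = sum(counts.values())
    agenda = {reserved_topic: reserved}
    for topic, n in counts.most_common():
        agenda[topic] = round(remaining * n / total)
    return agenda

# Two stakeholders submit topics before the meeting.
print(build_agenda([["budget", "habitat"], ["budget"]]))
```

The key design choice is that the silent stakeholders’ slot is carved out first, before any time is distributed by popularity, so it can never be crowded out.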

Consultant: When acting as Consultants, AI systems are given voice rights but not decision rights. They cannot make final decisions, but they can inform decision-making processes by analyzing and interpreting large amounts of data. Consider, for instance, an AI system tasked with assessing how urbanization affects nearby ecosystems. This system would model several development scenarios, assess environmental data, and present results and suggestions to reduce ecological damage. Human stakeholders, however, retain the final say. In this role, AI Voice helps stakeholders make better decisions by offering comprehensive, data-driven environmental analysis.

Optimizer: An AI system in the Optimizer role has decision rights but no voice rights. The AI is in charge of determining the most efficient course of action given stated objectives and constraints; it does not, however, make recommendations or express opinions to influence outcomes. Within predetermined efficiency metrics and criteria, the AI systematically evaluates the data submitted by stakeholders to arrive at the best possible distribution of resources or the most efficient fixes for issues. An environmental organization might, for instance, use an AI system to distribute funds for conservation efforts. The AI would evaluate every project against key ecological performance measures, such as potential improvements in habitat quality or species preservation. The AI Voice would then allocate money in a way that maximizes environmental outcomes, ensuring that decisions are made impartially and without bias.
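A minimal sketch of such a rule-based Optimizer follows, assuming each project carries a single cost and a single ecological score. The project data and the greedy heuristic are invented for illustration; a production system would likely use constrained optimization rather than this simple ranking:

```python
def allocate_budget(projects, budget):
    """Fund projects greedily by ecological score per unit cost until
    the budget is exhausted. Each project is (name, cost, score)."""
    funded = []
    for name, cost, score in sorted(projects,
                                    key=lambda p: p[2] / p[1],
                                    reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded

projects = [("wetland restoration", 40, 90),
            ("reef monitoring", 30, 45),
            ("forest corridor", 50, 120)]
print(allocate_budget(projects, 100))  # ['forest corridor', 'wetland restoration']
```

Note that the AI here expresses no opinion: the ranking criterion is fixed in advance by humans, which is exactly what distinguishes the Optimizer from the Consultant and Collaborator roles.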

Collaborator: AI systems designated as Collaborators are given the same voice and decision rights as human members, enabling them to engage actively in the decision-making process: AI Voice participates in discussions, makes proposals, and casts votes on resolutions. An AI Voice system attached to a wildlife conservation board might be responsible for analyzing and presenting data on endangered species. Based on its analyses, this AI would suggest ways to safeguard these species on behalf of the wildlife. Moreover, it would participate in the voting process, influencing the distribution of funds to help ensure that the best wildlife preservation tactics are prioritized.

While AI Voice’s Optimizer and Collaborator roles appear promising, giving AI systems decision rights raises several difficult ethical issues. It is crucial to evaluate rigorously how, and to what degree, humans should assign decision-making duties to AI. This includes examining the shifts in ethical and legal obligations that come with using AI for decision-making, as well as building strong frameworks that efficiently govern AI behavior and guarantee that its incorporation into decision-making complies with legal and ethical constraints.

Techniques for a Responsible AI Voice

Three techniques are proposed here to enable responsible use of AI Voice and address its inherent problems: stakeholder and domain expert involvement, transparency, and AI literacy. These tactics, however, should be seen as catalysts for an ongoing dialogue about ethical AI that works for the greater good rather than as set-in-stone guidelines.

First, to avoid perpetuating preexisting biases, stakeholder and domain expert input is essential to the development of AI and the use of any AI Voice output. Adding a wider range of perspectives enriches the AI training process, providing the system with a more comprehensive collection of varied experiences and opinions to learn from. This kind of inclusivity is a deliberate tactic to guarantee perspective authenticity and avoid distorted narratives. Without such a wide range of input, AI-generated content can deviate from accepted moral or social norms, causing harm or confusion. Furthermore, even though AI Voice can mimic speech patterns, it lacks an innate understanding of the nuanced human experiences it is designed to replicate.

Domain specialists are therefore crucial for examining AI Voice outputs, advising on system improvements, and helping to identify and correct biases. Their knowledge ensures that AI develops in a way that is consistent with broader social values and enhances decision-making. The crucial responsibility of assigning voice and decision rights to AI systems should also fall to these stakeholders and domain experts; in collective decision-making settings, this choice determines the AI Voice’s degree of authority and accountability.

Second, transparency is crucial in the usage of AI, particularly regarding the sources of its training data and its use cases. Openly disclosing where and how AI is used clarifies its effect and the possible scope of its decision-making. Analyzing the provenance and makeup of training data is also essential to comprehending the AI’s possible biases and constraints. This degree of openness enables stakeholders to anticipate and comprehend AI behavior, which can avert issues with trust and accountability. Transparency, for example, can stop AI from being carelessly used in settings where its conclusions could have unforeseen effects, or for purposes for which it was not designed.

Transparency also makes it easier to spot gaps or overrepresentations in the training set, which, if ignored, could produce biased AI outputs. It further benefits AI systems by increasing user confidence and enabling a more informed and cooperative incorporation into decision-making procedures. It fosters an atmosphere in which people with the requisite knowledge can more easily examine, improve, and, if needed, fix AI tools.

Third, people must become more data-literate and AI-savvy. AI literacy ensures that people and organizations making decisions with AI are equipped with the knowledge and skills needed to work with the technology. Users who are familiar with AI’s data-processing capabilities can recognize its advantages and spot potential weaknesses, such as when AI outputs deviate from expected norms.

Users who are knowledgeable about AI can challenge the reliability of a system’s findings and identify potential flaws in its judgment. This expertise is imperative for ensuring that AI technologies are used effectively and that their conclusions are sensibly integrated into overall strategies and operations. Additionally, users with a thorough grasp of the data that drives AI models can support the continuous development of AI systems by advocating for data correctness and integrity.

When the right safeguards are in place, AI has the potential to be a positive force. Ensuring data transparency and enhancing user comprehension of the technology are important first steps toward responsible usage of AI. It is equally important to involve diverse stakeholders in the design of AI systems, especially when granting those systems voice and decision rights. By combining these technological and participative tactics, we can use AI systems and AI Voice to overcome the “perspective deficit” in group decision-making and build a fairer future in which future generations of humans and the environment can coexist more peacefully.
