Generative AI Is All About the Money


The commotion at OpenAI is just another reminder of Silicon Valley's obsession with money. Once OpenAI accepted hundreds of millions of dollars in for-profit investment, the capitalists were going to end up in charge. Given the pressure on social innovators to "do more AI," it is imperative that we understand these dynamics.

Much of the excitement and attention surrounding generative AI is driven by financial goals. Portraying AI as a technology so immensely powerful and interesting that it poses an existential threat to humanity, even though it remains far from that point, greatly inflates the valuations of generative AI companies like OpenAI.

Last year's leadership struggle at OpenAI most likely did not start over money, but the nonprofit board members who tried to exercise what they saw as their role ultimately lost it. Admittedly, OpenAI's nonprofit mission was unusual: to ensure the safer development of artificial general intelligence, that is, of our future robot overlords. With the CEO reinstated and a new board member appointed who is seen as far friendlier to investors (especially Microsoft, OpenAI's key tech partner), it is now clear that making money has taken precedence over helping humanity.

OpenAI does not appear to have been harmed by the incident: its next funding round is reportedly predicated on a valuation exceeding one hundred billion dollars. This looks like a classic Gartner hype cycle moment, an investment bubble. Just because investors and businesses are in a frenzy over getting rich from this technology does not mean every organization should halt what it is doing and spend charitable funds on generative AI. Let the investors pour billions into these companies as they search for the most lucrative uses of their inventions. However incredible the technology is, social impact leaders must assess it objectively and in light of actual needs.

The social good sector desperately needs better technology, especially basic software, improved data, and even ordinary AI. It is unclear whether spending limited nonprofit resources on generative AI first, ahead of more fundamental technology needs, is a wise allocation. Rather than subsidizing well-funded AI businesses that concentrate wealth, we should stay focused on our social impact goals of serving people and the planet.

AI to Replace Humans?

As tech commentators such as Cory Doctorow have noted, the sums being invested in the newest AI technologies can only be justified if they become extremely profitable, and replacing people with machines is the only realistic path to that kind of wealth. Impressive as the latest AI technology is, it is not yet ready to fully replace people. And even if it were, it is far from obvious how massive job losses would benefit society.

Replacing human workers with automated systems is problematic because AI systems are not actually very intelligent. They lack compassion, empathy, and judgment. Not that attempts to replace humans haven't been made!

Self-driving cars have been anticipated for years, but the cost of this experiment became clear when a Cruise robotaxi in San Francisco hit a pedestrian and then, not recognizing that the woman was trapped beneath the vehicle, pulled over 20 feet to park at the side of the road, dragging her the whole way.
The National Eating Disorders Association replaced its unionizing human counselors with Tessa, a chatbot powered by generative AI. Within a week, Tessa was predictably found to be sending hotline texters the exact opposite of the advice a competent eating disorder counselor would give, most likely because the typical advice about weight found online, the raw material chatbots are trained on, is so poor. The outcome? The helpline was shut down, people in need were let down, and the organization suffered severe reputational harm.
The unavoidable cost of AI mistakes is a glaringly overlooked part of the story. The media is full of examples of AI falling short of expectations, and no AI technology is flawless in real-world use. For the social sector, it is critical to select applications where the cost of an AI mistake is low, or can be actively caught and corrected by humans. Avoid applications like Tessa's, where a mistake could seriously harm your organization or your stakeholders. Keeping a human in the loop is the best way to identify and fix problems. Better yet, consider how AI tools could make the people in your organization, and the people you serve, smarter, more capable, and more effective. Never entrust life-or-death situations to an unsupervised bot!

How Should a Social Innovator Proceed?

First, don't fall for the hype. The tech world was just as excited about blockchain seven years ago, and as far as I can tell, none of that excitement translated into large-scale social impact. Don't get me started on the metaverse! Think carefully before building an AI application that feeds information from the marginalized communities you serve into a profit-driven model, enabling these businesses to exploit the underprivileged. Failing these communities is not a deliberate aim of Silicon Valley; it simply happens, routinely, in the relentless pursuit of profit. Unlike for-profit tech firms, nonprofits' paramount duty is to act ethically and in the best interests of the people we serve. Keep your communities' data out of the hands of for-profit companies.

Second, stop, look, and listen. Resist the industry's advice to run around with an AI hammer looking for nails. Projects that begin with more attention to the technology to be used than to the actual need are usually doomed from the start. Solve your real problems with the best and most affordable technology for the task at hand, which may not be AI. When peer leaders present case studies of AI tools that succeeded, listen closely for where the technology failed; that is even more instructive. And ignore techies and vendors making extravagant promises of success, at least for social impact applications.

Third, start small and experiment. Generative AI tools are widely accessible, cheap or free, and can be quite helpful for writing tasks. As long as you are not trying to replace your workforce wholesale, you will probably get good value from the basic products relative to their price. They are not ready to replace people.

Almost no nonprofit has the staff needed to build its own AI solutions. Because data scientists command high salaries, investing in custom AI deployments can be very expensive. The case for spending hundreds of thousands of dollars on consultants to build something for your organization needs to be extremely strong.

Examples of Generative AI Tools in Social Impact Practice

As a longtime AI practitioner, I love what AI can do. Even as OpenAI and its peers generate excessive excitement, traditional AI has a far better track record of actually delivering value in the social sector. Simply put, only 5–10% of the flashy applications being discussed today would likely benefit from AI. Keeping ethics and mission front and center makes it easier to build applications that work. Here are a few examples:

Spell-Checkers on Steroids

ChatGPT and its growing family of relatives and rivals have been mockingly called "spicy autocomplete" and "stochastic parrots." I call them "spell-checkers on steroids." That may sound mocking too, but I mean it positively. If you consider a modern spell-checker an essential writing tool, imagine a version five or ten times more powerful for certain writing tasks!

Joan Mellea, my nonprofit's co-founder, estimates that ChatGPT saves her 20–25% of her time on writing tasks. It is quite helpful for fitting a 300-word answer to a grant question into a 250-word limit, or for simplifying an essay or explanation a team member has drafted. She has used it to create policies required by funders or governments. Crucially, Joan never trusts ChatGPT's raw output, just as she wouldn't blindly trust a spell-checker. Like a spell-checker whose suggestions you can accept or reject, ChatGPT gives her candidate sentences, and she keeps only the clearer ones. In short, it is an excellent tool for people who understand their subject matter and want help communicating it. For someone who doesn't know what they are writing about, on the other hand, it is a big problem: they will probably miss the mistakes.
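To make that workflow concrete, here is a minimal sketch of the "fit under a word limit" task using the OpenAI Python client. The model name and prompt are my own illustrative choices, not Joan's actual setup, and the output is only a suggestion for a human editor to accept or reject.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tighten(answer: str, word_limit: int) -> str:
    """Ask the model to compress a grant answer under a word limit.
    The result is a draft for a human editor, never final copy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are an editor. Preserve all facts; invent nothing."},
            {"role": "user",
             "content": f"Rewrite this in at most {word_limit} words:\n\n{answer}"},
        ],
    )
    return response.choices[0].message.content

draft = tighten(open("grant_answer.txt").read(), 250)
print(draft)  # a human reviews this against the original before submitting
```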

The Guide by the Side

The problems with Tessa, the eating disorder helpline chatbot, were all too foreseeable. It is unethical to put large language models, like the ones underlying ChatGPT, in the position of assisting people in need when they don't know what to say and cannot actually help a person in distress. My constant worry is that someone will disclose thoughts of hurting themselves and an overconfident chatbot will push them toward doing so.

Having worked in the helpline movement for the past five years, I can easily imagine intriguing AI applications that respect the cost of mistakes. For instance, the Danish child helpline Børns Vilkår is staffed by volunteers. For them, it built an AI "Guide by the Side" that observes the chat between a volunteer and a young person seeking counseling. When the AI guide detects one of three discussion topics (parents divorcing, concerns about COVID-19, and substance misuse), it shows the volunteer helpful suggestions, such as reminding the texter of their rights during a parental divorce or supplying health statistics. When the AI guide raises an irrelevant point, the volunteer simply ignores it.
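Børns Vilkår has not published its implementation, but the pattern is simple enough to sketch. The version below uses naive keyword matching with made-up topics and tips purely for illustration; the real system presumably uses a trained classifier. The key design point is that the AI only suggests, and the human volunteer always decides.

```python
# A minimal sketch of the "guide by the side" pattern: the AI only surfaces
# suggestions to the human volunteer, who is free to ignore them.
# Topics, keywords, and tips are hypothetical illustrations.

TOPIC_TIPS = {
    "divorce": "Remind the texter of their rights when parents divorce.",
    "covid": "Offer current health statistics about COVID-19.",
    "substances": "Share harm-reduction facts about substance misuse.",
}

TOPIC_KEYWORDS = {
    "divorce": ["divorce", "custody", "separated"],
    "covid": ["covid", "corona", "vaccine"],
    "substances": ["drinking", "drugs", "pills"],
}

def suggest_tips(transcript: str) -> list[str]:
    """Return tips for any detected topics; an empty list means stay silent."""
    text = transcript.lower()
    return [
        TOPIC_TIPS[topic]
        for topic, keywords in TOPIC_KEYWORDS.items()
        if any(word in text for word in keywords)
    ]

# The volunteer sees these as optional hints next to the chat window:
for tip in suggest_tips("my parents are getting a divorce and I'm scared"):
    print("HINT:", tip)
```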

Another noteworthy example is the Trevor Project, which had a backlog in training volunteers for its LGBTQ youth support hotline. Its rapid growth and the expected turnover of volunteers meant it needed more human trainers than it had. So it built an AI-driven conversation simulator that plays the role of a young person contacting a counselor. In training sessions, new volunteers practice on the AI chatbot acting as the help-seeker; even when the simulator makes a mistake, it cannot harm a real LGBTQ young person in need of support. Trainees graduate from practicing with the chatbot to sessions with a human trainer, who makes sure they are ready for actual counseling conversations. Trevor has been able to train far more volunteers than when human trainers ran every session.
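Here is a rough sketch of that training-simulator pattern, assuming a large language model role-plays the help-seeker. This is not the Trevor Project's actual system; the persona prompt and model choice are my assumptions.

```python
# A sketch of the training-simulator pattern: the model role-plays a
# help-seeker so a trainee counselor can practice. The persona prompt
# and model are illustrative, not the Trevor Project's real system.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are role-playing a nervous teenager texting a support line for the "
    "first time. Stay in character, answer briefly, and open up gradually as "
    "the counselor builds trust. Never give counseling advice yourself."
)

history = [{"role": "system", "content": PERSONA}]

print("Training session started. Type 'quit' to end.")
while True:
    trainee_msg = input("trainee> ")
    if trainee_msg == "quit":
        break
    history.append({"role": "user", "content": trainee_msg})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("simulated help-seeker>", reply)
```

Because the "person in distress" here is simulated, the usual cost-of-error logic is inverted: the chatbot's mistakes fall on a trainee in a practice session, not on a real young person in crisis.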

More Good AI Examples

Beyond fundraising and helplines, other nonprofits are using generative AI for user support. The responsible applications are closed-ended rather than open-ended chatbots, which can be asked anything and may end up saying anything! That means the conversation is restricted to the task at hand. If your website has 100 help pages and the chatbot is only permitted to direct users to those articles, for instance, the application is not risky: the cost of a mistake is showing the user a help article that isn't very helpful.
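Here is a minimal sketch of that closed-ended pattern: the bot embeds the user's question and the help article titles, and can only ever return an existing article, never a free-form answer. The article titles and embedding model are illustrative assumptions.

```python
# A sketch of the closed-ended support-bot pattern: the bot can only point
# to existing help articles. Titles and model are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

HELP_ARTICLES = [
    "How to reset your password",
    "How to update your donation amount",
    "How to contact a support volunteer",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(
        model="text-embedding-3-small",  # illustrative
        input=texts,
    )
    return np.array([item.embedding for item in response.data])

article_vecs = embed(HELP_ARTICLES)  # embed once, reuse for every query

def best_article(question: str) -> str:
    """Return the closest help article by cosine similarity.
    Worst case: the user sees a not-very-helpful article."""
    q = embed([question])[0]
    scores = article_vecs @ q / (
        np.linalg.norm(article_vecs, axis=1) * np.linalg.norm(q)
    )
    return HELP_ARTICLES[int(np.argmax(scores))]

print(best_article("I forgot my login"))
```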

Of course, the OpenAI craze is built on the latest advance in AI, generative AI. But many other AI applications are already widely deployed. Thirty years ago, I began working at Benetech, which made reading machines for the blind using what was then cutting-edge AI. MapBiomas, a Brazilian organization that won the Skoll Award, uses AI to analyze land use from satellite imagery. It can spot a new logging road cut into a protected rainforest within a day or two, which helps curb illegal logging. My team at Tech Matters is even building an app that uses quite a basic level of AI to identify soil types, so farmers and ranchers can quickly learn what will grow in a given field.

In Summary

For social change leaders, duty to the people we serve is a key component of acting ethically and practically. Unlike the commercial tech industry, our North Star is positive change, not profit. We have a responsibility to adopt new technologies carefully and with our communities' best interests in mind. AI will no doubt contribute significantly to social change, but not this year, and not in the way the industry has promised. I hope you will join me and other nonprofit tech leaders in ensuring that AI is used ethically to achieve the greatest possible good for society.
