Concerns over data security and privacy are shared by 80% of AI decision makers

Organizations are excited about generative AI’s potential to boost employee and corporate efficiency, but a lack of expertise and strategic planning is keeping them from fully realizing its promise.

That is the finding of a survey of 300 US GenAI strategy or data analytics decision makers conducted in early 2024 by Coleman Parkes Research and sponsored by data analytics company SAS. The poll set out to identify key investment areas and the challenges organizations are facing.

“Organizations are realizing that large language models (LLMs) alone don’t solve business challenges,” stated SAS strategic AI advisor Marinela Profi.

Rather than being treated as a shiny new toy that will deliver every business objective, GenAI should be viewed as an ideal contributor to hyperautomation and to accelerating existing processes and systems. Before diving in headfirst and risking lock-in, organizations should invest in technology that enables integration, governance, and explainability of LLMs, and take the time to build a progressive plan.

Organizations are encountering obstacles in four crucial domains of execution:

  • Achieving compliance and boosting trust in data usage. Just 10% of organizations have a robust strategy in place to assess bias and privacy risk in LLMs. Furthermore, most US corporations are at risk of regulatory noncompliance, and 93% lack a thorough governance framework for GenAI.
  • Integrating GenAI into existing processes and systems. Organizations acknowledge that connecting GenAI with their current systems is causing compatibility problems.
  • Skills and know-how. Internal GenAI expertise is in short supply. Executives worry that, given the shortage of qualified candidates, their HR departments cannot source the skills needed to maximize the return on their GenAI investment.
  • Estimating costs. Leaders point out that using LLMs carries significant direct and indirect costs. Organizations now regard the per-token pricing quoted by model providers as prohibitive, and preparing private data, training models, and managing ModelOps add further lengthy and complex expenses.

Profi continued, “It will ultimately come down to identifying real-world use cases that solve human needs in a sustainable and scalable manner and deliver the highest value.”

“With this study, we’re continuing our commitment to helping organizations remain resilient, relevant, and financially savvy. The ability to adopt resilience rules is crucial to maintaining a competitive edge in an era where AI technology evolves virtually daily.”

The study’s findings were announced today at SAS Innovate, the conference on artificial intelligence and analytics that SAS hosts in Las Vegas for partners, business executives, and technical users.
