30% Asian Organizations Lack GenAI Policies, Prone to Security Risks

Generative AI is rapidly changing the world; the global market is expected to grow at a CAGR of 25.8% and reach an estimated USD 136.49 billion by 2032. Asia has been at the forefront of this revolution. However, as we bask in the promise of AI, there are numerous challenges and uncertainties that demand our attention.

Recently, the Generative AI 2023: An ISACA Pulse Poll found that over 42% of employees already use generative AI, even without policies in place. This suggests a high level of interest in generative AI among employees, but also that organizations are not yet fully prepared to support its use.

Read on as we uncover the key findings from the survey.

Lack of Policies and Training

The survey reveals a surprising gap: only 32% of Asian organizations have explicit policies for using generative AI, and only 11% have a formal, comprehensive policy in place. Around 30% said they neither have a policy nor have any plans to create one.

The 42% of employees already using generative AI without policies in place is likely an undercount, since a further 30% of respondents aren't sure whether their employees are using AI at all. The gap between employee adoption and organizational readiness may therefore be even wider than it appears.

These employees in Asia are using generative AI in a variety of ways: creating written content (67%), increasing productivity (41%), improving customer service (30%), automating repetitive tasks (28%), and improving decision-making (23%).

Even though generative AI is changing the game, only 5% of organizations provide AI training to their staff. Over half (52%) stated that teams directly affected by AI receive no training whatsoever, and merely 23% of participants reported having a substantial understanding of generative AI. For startups, this underscores the need for clear guidelines and continuous education within their teams.

“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organizations need to catch up in providing policies, guidance, and training to ensure the technology is used appropriately and ethically. With greater alignment between employers and their staff around generative AI, organizations will be able to drive increased understanding of the technology among their teams, gain further benefit from AI, and better protect themselves from related risk,” said Jason Lau, ISACA board director and CISO at Crypto.com.

Ethical Concerns and Risks

The poll also delved into ethical concerns and potential risks linked to AI. About 29% of respondents expressed concerns about insufficient attention to ethical standards in AI implementation. In terms of risk management, 25% of organizations view handling AI risks as an urgent task, while 31% perceive it as a longer-term priority. Surprisingly, 29% of organizations have no plans to address AI-related risks.

Ethical considerations and the risks that come with AI are front and center: 65% of respondents cited misinformation as a major risk, followed by privacy violations (64%) and social engineering (48%) as the top risks associated with generative AI.

Furthermore, the poll highlighted significant concerns about the exploitation of generative AI, with 45% of respondents expressing substantial worry about its misuse by bad actors. Additionally, 65% of participants observed that adversaries are leveraging AI as effectively as, if not more effectively than, digital trust professionals.

“AI training and education is imperative for digital trust professionals, not only to be able to understand and successfully leverage the technology, but to also be fully aware of the risks involved. As quickly as AI has evolved, so have the ways that the technology can be misused, misinterpreted, or abused, and professionals need to have the knowledge and skills to guide their organizations toward safe, ethical, and responsible AI use,” said RV Raghu, ISACA India Ambassador and director, Versatilist Consulting India Pvt Ltd.

These findings emphasize the critical need for organizations to address ethical considerations and proactively manage the risks posed by AI, especially in the face of escalating concerns regarding its misuse and exploitation.

Impact on Jobs

Analyzing the current roles intertwined with AI in Asia, it becomes evident that individuals working in security (52%), IT operations (46%), risk teams (44%), and compliance teams play pivotal roles in ensuring the safe integration of AI technologies.

In the near future, a considerable portion (24%) of respondents foresee their organizations creating new AI-related job positions. While 57% anticipate job losses due to AI, digital trust professionals are optimistic about their own prospects, with 71% believing AI will bring positive changes to their roles.

However, 86% of respondents from Asia feel they need additional training to keep their jobs or advance their careers, highlighting the need for ongoing learning in light of AI advancements.

The Road Ahead

Despite the uncertainties and potential risks associated with AI, a significant number of respondents believe AI will have a positive or neutral impact on their industry (84%), organization (84%), and career (83%).

Furthermore, a substantial 84% of respondents agree that AI serves as a tool to enhance human productivity, and 76% of those surveyed believe AI will bring positive or neutral changes to society at large.

This overwhelming consensus highlights the view that AI technology acts as a valuable asset in empowering individuals to be more efficient and effective in their endeavors, not only in professional spheres but also in shaping a positive societal impact.

AI ethics and fairness are important aspects of developing and deploying AI solutions. According to a white paper by Fujitsu, an AI ethics impact assessment is a process for assessing the ethical impact of AI on people and society before a system is provided to users. It enables developers, providers, and customers of AI systems to understand the potential benefits and risks of using AI and to take appropriate measures to mitigate or prevent negative impacts.

About ISACA

ISACA is a global organization dedicated to advancing digital trust. With over 165,000 members worldwide, ISACA provides knowledge and training in areas like information security, governance, risk, and privacy. Operating in 188 countries, it supports careers, transforms organizations, and promotes a trusted digital world. Through its foundation, One In Tech, ISACA also fosters IT education for underprivileged communities.
