Five Techniques To Ensure Reliable And Honest Use Of Generative AI

Forbes Technology Council

President & CTO at ContractPodAi.

It's no longer a stretch of the imagination to see how generative artificial intelligence (AI) brings value to modern enterprises. Streamlined work speeds up organizational processes, accurate collection of data reduces human errors and advanced analytics facilitates strategic decision-making.

With senior leaders, business users and customers becoming more aware of these benefits, the deployment of the technology within organizations is happening at a rapid pace. In 2022 alone, according to research published by Statista, large-scale generative AI adoption was more than 20%; by 2025, it is expected to reach nearly 50%.

Nevertheless, it's important for companies to take a breath and step back amid this wave of adoption in the corporate workplace. Early adopters of technology of this caliber often reap significant benefits, so there is understandable urgency to adopt it to gain a competitive edge and increase operational efficiency.

However, early adopters must be prepared for unexpected outcomes and be ready to make adjustments as the technology continues to evolve. The implementation process will determine the success of generative AI, and a well-thought-through approach is essential.

Below are five ways to ensure reliable (and honest) use of generative AI for your business today and in the future.

Apply A Foundational Layer For AI

First, the output of the AI system should make sense for the task and, more broadly, your industry. For instance, if you are using generative AI for legal purposes, make certain that the system is only generating high-quality outputs that are relevant to legal departments and users.

Tailoring the output to the specific industry builds trust and mitigates the risk of irrelevant content. This drives better business outcomes today and helps shape the application and direction of AI technology moving forward.
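One simple way to enforce such a foundational layer is to gate outputs through a domain-relevance check before they reach users. The sketch below is a minimal, illustrative version: the keyword list and threshold are assumptions for a legal use case, not a production-grade relevance model (which might instead use a classifier or embedding similarity).

```python
# Minimal sketch of a domain-relevance gate for AI output.
# The keyword set and min_hits threshold are illustrative assumptions.

LEGAL_KEYWORDS = {"contract", "clause", "liability", "indemnity", "counterparty"}

def is_domain_relevant(output: str, keywords=LEGAL_KEYWORDS, min_hits: int = 1) -> bool:
    """Return True if the output mentions enough domain terms to pass the gate."""
    words = {w.strip(".,;:!?").lower() for w in output.split()}
    return len(words & keywords) >= min_hits
```

Outputs that fail the gate can be discarded or routed back for regeneration instead of being shown to legal users.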

Trust, But Verify

Generative AI's insights and suggestions should be treated as supplementary resources, not a substitute for human knowledge. Users should be aware that this technology (like any technology) can occasionally produce inaccuracies; generative AI should therefore be treated as a "supportive copilot" rather than an autonomous agent.

The system's outputs should be thoroughly reviewed; they are not a substitute for human review and decision-making. And while AI can process vast amounts of data, it lacks human contextual understanding, so users must keep the wider context in mind when interpreting and applying its outputs.
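The "supportive copilot" stance can be enforced structurally rather than left to habit. The hypothetical sketch below wraps every AI suggestion in a draft object that refuses to release its text until a named human has approved it; the class and method names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """An AI suggestion that must pass human review before it can be used."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        """Record the human sign-off that unlocks this draft."""
        self.reviewer = reviewer
        self.approved = True

    def final_text(self) -> str:
        """Release the text only after human review; raise otherwise."""
        if not self.approved:
            raise RuntimeError("AI draft has not been human-reviewed yet")
        return self.text
```

Because unreviewed drafts raise an error rather than silently passing through, downstream code cannot accidentally treat an AI suggestion as a final decision.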

Be Prepared To Troubleshoot

Despite generative AI's potential to enhance business operations globally, one known flaw of the technology is the "hallucination," where large language models (LLMs) generate inaccurate or unexpected outputs. When this happens, users can reformulate or redraft the prompt and resubmit it for improved results. Always cross-verify information and stay aware of this inherent aspect of AI systems.
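The reformulate-and-resubmit loop can be automated. Below is a minimal sketch: `generate` stands in for any LLM call and `verify` for any cross-verification check the caller supplies; both are assumptions here, not references to a specific API. The loop tries each prompt phrasing in turn and returns the first output that passes verification.

```python
# Sketch of retrying generation with alternative prompt formulations
# until the output passes a caller-supplied verification check.
# `generate` and `verify` are placeholders for real LLM and checking logic.

def generate_with_retry(generate, prompts, verify):
    """Try each prompt formulation in turn; return the first verified output."""
    for prompt in prompts:
        output = generate(prompt)
        if verify(output):
            return output
    return None  # every formulation failed verification; escalate to a human
```

Returning `None` rather than the last unverified output keeps the failure explicit, so a human can step in instead of an unreliable answer slipping through.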

Raise Organizational Awareness

Helping users understand the proverbial "do's and don'ts" of AI, and the limits of its usage, makes them more comfortable engaging with and trusting the technology. In other words, better AI literacy leads to more informed decisions about the technology's use and greater engagement with AI-generated content.

Legal teams and HR leaders should prepare policies and training on using generative AI and new technologies, ensuring that team members throughout the organization understand both the benefits and the drawbacks of such tools.

Increase Technical Supervision Of AI And Its Users

Without proper controls and oversight, generative AI can have a negative impact on the business; think of the proliferation of unsafe, inaccurate or out-of-date information. To combat this, administrators can set permissions governing who can access AI-generated content and establish parameters and rules around the types of outputs the AI can, and cannot, deliver. This provides strict control over user and system actions and strengthens enterprise security.
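A permission scheme like this can be as simple as a role-to-output-type mapping checked before any AI content is released. The sketch below is illustrative only; the role names and output categories are assumptions, and a real deployment would back this with the organization's identity and access-management system.

```python
# Minimal role-based gate for AI-generated content.
# Role names and allowed output types are illustrative assumptions.

PERMISSIONS = {
    "admin":   {"summary", "draft_contract", "risk_analysis"},
    "analyst": {"summary", "risk_analysis"},
    "viewer":  {"summary"},
}

def can_access(role: str, output_type: str) -> bool:
    """Check whether a role may receive a given category of AI output."""
    return output_type in PERMISSIONS.get(role, set())
```

Unknown roles default to no access, which keeps the control fail-closed rather than fail-open.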

As generative AI models become more widespread in industries, the technology's adoption calls for the critical steps mentioned above and other best practices.

Implementing proper frameworks and guardrails around the technology will go a long way toward preventing the dissemination of unsound information and the misuse of systems altogether. This added level of security will build trust among senior leaders, business users and customers, including those who remain cautious about deploying the latest digital tools and taking advantage of their capabilities.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.