{"id":52392,"date":"2024-06-20T07:21:25","date_gmt":"2024-06-20T11:21:25","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=52392"},"modified":"2024-06-19T12:25:03","modified_gmt":"2024-06-19T16:25:03","slug":"5-keys-to-building-trustworthy-ai-and-closing-the-trust-gap","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/5-keys-to-building-trustworthy-ai-and-closing-the-trust-gap\/","title":{"rendered":"5 Keys To Building Trustworthy AI And Closing The Trust Gap With Employees"},"content":{"rendered":"

In this segment of \u201cOffice Optional with Larry English<\/a>,\u201d Larry discusses five important ways to build trustworthy artificial intelligence.<\/h2>\n
\n

Despite all the buzz about the potential for artificial intelligence to transform organizations for the better, workers aren\u2019t yet seeing the value. A 2024 Workday report<\/a> found that only a little more than half of employees embrace AI at work.<\/p>\n

The trust problem \u2014 referred to as the AI trust gap \u2014 goes deeper than employees mistrusting the technology itself. When implemented carelessly, AI can also degrade the trust workers have in their employer. To this point, the Workday report found that less than a quarter of employees are confident that their employers prioritize worker interests when implementing AI. It\u2019s well-documented<\/a> that companies with lower employee trust have lower engagement and higher attrition.<\/p>\n

This doesn\u2019t mean leaders should shy away from AI in the workplace<\/a>.<\/strong> Rather, leaders must first ask themselves \u201cWhat is trustworthy AI?\u201d and then craft an AI implementation strategy with two goals: building employee trust in AI technology and maintaining employees\u2019 trust in the organization.<\/p>\n

Building Responsible and Trustworthy AI is a Major Hurdle<\/h2>\n

The AI trust gap is a major hurdle for widespread adoption of AI<\/a> across an organization. Employee distrust in AI can take different forms, many of which are based in fear: Will AI take my job? Can I trust the output from an AI tool? How does it work (also known as the black box problem)? Is my leadership paying attention to the risks? What is the real value AI brings? And so forth.<\/p>\n

Many decisions about when and how to use AI come down to the specific use case. For example, using AI to analyze facial expressions and monitor employee behavior makes employees wary, according to a Pew Research Center study<\/a>.<\/strong><\/p>\n

Kelly Trindel, chief responsible AI officer at Workday, has a different take: \u201cAt Workday, we are guided by our core values of innovation and integrity. We take a risk-based approach to responsible AI governance, understanding that AI can absolutely be developed and used to amplify human potential and positively impact society.\u201d<\/p>\n

5 Tactics for Creating Trustworthy AI at Your Organization<\/h2>\n

If you\u2019re aiming for widespread adoption of AI in your organization, you\u2019re going to need a game plan for closing the AI trust gap. Here\u2019s how to get started:<\/p>\n

Give employees space and time to experiment with AI tools.<\/h3>\n

People can\u2019t get comfortable with AI if they never get a chance to experience how the technology works firsthand. Whether you build an internal AI learning program or merely provide small opportunities for employees to play around with AI tools to see how they could help ease their day-to-day, it\u2019s important to encourage a baseline of familiarity with the technology.<\/p>\n

For instance, we have an AI task force for sharing news, ideas, trends and findings around AI. The goal is to give everyone a better understanding of AI to decrease any fear around the technology.<\/p>\n

Share information about responsible AI principles and practices with your employees.<\/h3>\n

Employees want to know that their leadership has considered both the benefits and potential drawbacks associated with AI, and that they are taking a thoughtful, measured approach to AI innovation<\/a>.<\/strong><\/p>\n

\u201cAt Workday, we perform a risk evaluation on all new AI use cases, and based on the results we specify needed guardrails which must be addressed before the AI can move forward,\u201d according to Trindel. \u201cWe\u2019re currently engaging in some internal research to determine how to guide future training and awareness efforts to be sure that our employees understand this program of safeguards.\u201d<\/p>\n

Work with employees to discover how AI can solve real employee problems.<\/h3>\n

Don\u2019t just implement AI for the sake of jumping on the hype bandwagon. Instead, anchor AI use in helping employees excel at their jobs<\/a> or lightening their load, thereby freeing them up for higher value tasks.<\/p>\n

\u201cOrganizations can improve employee trust in AI by rolling out tools that show clear value and solutions to existing problems,\u201d says Kathy Pham, vice president of AI and machine learning at Workday. \u201cAI should make the day-to-day easier, without burdening employees. By working across the organization to identify business problems and needs \u2014 then utilizing AI to address those challenges \u2014 that\u2019s where the real value resides. Open lines of communication around this across the company are critical.\u201d<\/p>\n

Transparently communicate your organization\u2019s plans for AI in the workplace.<\/h3>\n

Let employees know how and why AI will be used, how it will benefit them and make their jobs easier, and your vision for how it\u2019s going to impact the company\u2019s growth and future success.<\/p>\n

And if your organization is using AI in the recruiting process, it\u2019s also a good idea to let applicants know up front to begin building trust from the first engagement.<\/strong><\/p>\n

\u201cTransparency from the very beginning is key,\u201d Bennett says, citing as an example New York City Local Law 144<\/a>, which requires employers to disclose the use of algorithms during recruitment, hiring and promotion and to submit the algorithm to audits to check for bias. \u201cThat kind of transparency goes a long way toward helping employees feel more comfortable about how AI is being used.\u201d<\/p>\n

Vet AI vendors carefully and share your decision-making process with employees.<\/h3>\n

How does the AI vendor treat data? Do you have control over how your data is used and managed? Are humans kept in the loop, and are there other guardrails in place to ensure responsible and ethical AI?<\/p>\n

Sussing out the answers to these questions can help leaders feel the AI they\u2019re investing in is trustworthy and, in turn, communicate that trustworthiness to employees.<\/strong> \u201cMake sure your employees know you\u2019re investing in an AI tool that does not compromise when it comes to privacy and security,\u201d Pham says. \u201cIf you want to build trust, make sure that the tool does what it says it does.\u201d<\/p>\n

Trustworthy vendors provide transparency to address these items and more. During AI implementation<\/a> and after, your AI vendor can help you close the trust gap by providing resources to help educate employees on the AI tool and how it works and collaborating with you to increase transparency. For example, Workday provides fact sheets and builds in notifications throughout its products on where AI-mined data is coming from.<\/p>\n

\u201cThis is a shared responsibility ecosystem,\u201d says Trindel. \u201cWe have a role to play as an AI developer, and our customers have a role to play as an AI deployer. We as the developer make sure that we are as transparent as we can be to our customers. We\u2019re thinking more and more about how to provide educational information to our customers to clarify our perspective about this shared responsibility.\u201d<\/p>\n

AI Is Necessary, and So Is Closing the Trust Gap<\/h2>\n

Organizations need AI to excel today and in the future, yet widespread adoption of AI isn\u2019t going to happen if employees feel fearful or unsure of the technology. To overcome the AI trust gap, you need employee education, opportunities for AI exposure and collaboration between leadership and employees to determine where it makes sense to layer in responsible AI.<\/p>\n

This article was originally published on Forbes.com.<\/a><\/em><\/p>\n

\n

\n
\n In our on-demand webinar, our artificial intelligence expert provides an executive\u2019s guide to what leaders need to know about adopting ChatGPT and AI in the workplace.\n <\/div>\n
\n \n\n View Webinar\n <\/a>\n <\/div>\n <\/div>\n

Are you ready to explore how artificial intelligence can fit into your business but aren’t sure where to start? Our AI experts<\/a> can guide you through the entire process, from planning to implementation. <\/em>Talk to an expert<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"

In this segment of \u201cOffice Optional with Larry English,\u201d Larry discusses five important ways to build trustworthy artificial intelligence.<\/p>\n","protected":false},"author":41,"featured_media":52400,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_oasis_is_in_workflow":0,"_oasis_original":0,"_oasis_task_priority":"","_relevanssi_hide_post":"","_relevanssi_hide_content":"","_relevanssi_pin_for_all":"","_relevanssi_pin_keywords":"","_relevanssi_unpin_keywords":"","_relevanssi_related_keywords":"","_relevanssi_related_include_ids":"","_relevanssi_related_exclude_ids":"","_relevanssi_related_no_append":"","_relevanssi_related_not_related":"","_relevanssi_related_posts":"","_relevanssi_noindex_reason":"","footnotes":""},"categories":[1],"tags":[19112],"coauthors":[15095],"acf":[],"publishpress_future_action":{"enabled":false,"date":"2024-07-21 21:18:54","action":"change-status","newStatus":"draft","terms":[],"taxonomy":"category"},"_links":{"self":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts\/52392"}],"collection":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/users\/41"}],"replies":[{"embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/comments?post=52392"}],"version-history":[{"count":8,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts\/52392\/revisions"}],"predecessor-version":[{"id":52403,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts\/52392\/revisions\/52403"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/media\/52400"}],"wp:attachment":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/media?parent=52392"}],"wp:term":[{"taxonomy":"category","embeddable":true
,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/categories?post=52392"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/tags?post=52392"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/coauthors?post=52392"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}