{"id":44439,"date":"2023-07-03T07:00:08","date_gmt":"2023-07-03T11:00:08","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=44439"},"modified":"2023-07-07T15:21:50","modified_gmt":"2023-07-07T19:21:50","slug":"creating-a-chatgpt-content-policy-best-practices-for-using-chatgpt","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/creating-a-chatgpt-content-policy-best-practices-for-using-chatgpt\/","title":{"rendered":"How to Create a ChatGPT Content Policy: Best Practices for Using ChatGPT"},"content":{"rendered":"

ChatGPT can be a powerful time-saver for content creators, but only if you have the right policies in place to guide its use. Here are our thoughts on what to keep top of mind.

Outside the education world’s concerns about plagiarism and cheating, most worries about ChatGPT have centered on how it will affect businesses and employees. Will I lose my job? Can we afford to invest in the paid ChatGPT Plus or other AI tools? Are my employees ready for it?

But the education world’s concerns with academic integrity are also relevant to your business. Claiming other people’s work as your own or sharing inaccurate information because you haven’t “done your homework” has serious implications for your brand, your reputation and your financials.

Schools have rules about plagiarism and citing sources. In the mildest cases, students might be reprimanded and told to redo assignments. In more serious cases, they could receive a lower grade. At the college level, plagiarism could even lead to expulsion.

Of course, businesses don’t, and shouldn’t, have the same kind of control over their employees that schools have over students. However, just like schools, they should put ChatGPT content policies in place to decrease the likelihood that an employee will, even unintentionally, put their name to work that belongs to someone else, contributes to misinformation or spreads outright falsehoods.

No one can tell you exactly what policies to put in place for plagiarism, disinformation or any other workplace issue. Your exact ChatGPT content policy must reflect your business’s culture and needs. But generative AI tools like ChatGPT are a new universe.

In this blog, we’ll share some questions to consider while crafting your ChatGPT content guidelines and policies, which you should incorporate into and communicate alongside your other content policies.

How Will Your ChatGPT Content Policy Discourage Plagiarism?

To reiterate, what your employees learned in school still applies. Putting your name to work you did not create is not OK.

But in the business world, the stakes are much higher. Plagiarism, copyright and trademark infringement, and other intellectual property violations can have costly consequences. The challenge today is that ChatGPT has made it even harder to distinguish between truly original content and content that only appears original because ChatGPT rearranged it.

One policy matter to consider is whether you will require authors to disclose when they have used ChatGPT to help generate content, regardless of the use case. Rather than undermining the authority of the writer’s voice, doing so establishes trust and provides context for readers. It also provides a backstop if someone accuses you of using straight AI-generated content, even inadvertently.

Such statements may range from a simple “We used ChatGPT to help create this content” to identifying which sections of a piece of writing received AI assistance. You should also consider whether you will establish limits on how much content AI can inspire (0 percent? 5 percent? 10 percent?) and how you will calculate those limits. (Though they aren’t perfect, tools like ZeroGPT and GPTZero can provide estimates. You may want to explain which tool you will rely on to determine percentages and how you will handle appeals.)
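If your team wants to make that kind of limit enforceable, the check can be formalized in the editorial workflow. The sketch below is purely illustrative: `estimate_ai_percentage` is a hypothetical placeholder for whichever detection tool your policy designates, and the 10 percent threshold is an assumed example, not a recommendation.

```python
# Illustrative sketch of a policy check during editorial review.
# Assumptions: the 10 percent limit is an example, and estimate_ai_percentage()
# is a stand-in for whichever detection tool your policy names (ZeroGPT,
# GPTZero or another) -- wire it to that tool's output yourself.

POLICY_AI_LIMIT_PERCENT = 10  # example limit; set this from your own policy


def estimate_ai_percentage(draft_text: str) -> float:
    """Placeholder: return the share of the draft (0-100) flagged as
    AI-generated by the detection tool your policy designates."""
    # Stub value so the sketch runs; replace with the real tool's estimate.
    return 0.0


def review_draft(draft_text: str) -> dict:
    """Compare a draft's estimated AI percentage against the policy limit
    and record whether a disclosure statement is required."""
    estimate = estimate_ai_percentage(draft_text)
    return {
        "estimated_ai_percent": estimate,
        "within_policy": estimate <= POLICY_AI_LIMIT_PERCENT,
        # e.g., "We used ChatGPT to help create this content."
        "disclosure_required": estimate > 0,
    }


if __name__ == "__main__":
    print(review_draft("Example draft text for review."))
```

Whatever the tooling, a human reviewer still makes the final call, and the appeal process your policy describes applies whenever a writer disputes the estimate.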

How Will You Encourage Authenticity in ChatGPT-Generated Content?

Unedited ChatGPT results, full of generalizations, vague language and “No duh!” statements, do not engage readers or contribute new knowledge. Plus, as search and ChatGPT continue to evolve, writing full of such language will likely rank lower in search results in the future.

Google, for example, rewards content with higher rankings when it displays E-E-A-T: experience, expertise, authoritativeness and trustworthiness. E-E-A-T is the litmus test against which Google judges all content, whether AI-assisted or human-generated. That’s because E-E-A-T measures the value of a piece of content. Content that could come from anywhere or be written by anyone does not have value for Google or readers.

ChatGPT content policies and guidelines should help content contributors demonstrate their experience, expertise, authority and trustworthiness in their work. At the same time, like all of your writing guidelines, your ChatGPT policies must establish guardrails to prevent writers from sharing experiences that reflect badly on themselves or the company. Policies can also guide content producers on what kind of authority your writers can legitimately claim and what has the most value for your audiences.

Your ChatGPT content policy should indicate who has the authority to establish, maintain and enforce these guardrails and who is accountable if content goes off track.

How Will You Determine ‘Legitimate’ ChatGPT Use Cases?

ChatGPT is here, and your content producers are probably already experimenting with it on their own, if not on the job. You shouldn’t try to prevent them from using it. Instead, consider the best use cases for ChatGPT and similar AI tools.

The biggest and most obvious no-no is generating an AI result, putting your own name to it and publishing it as your creation. Beyond that, what you consider legitimate use cases for producing business content will vary. The two constants: you will always need a human in the loop, and using ChatGPT is a question of “how,” not “if.”

For example, AI could take a lot of the busy work out of creating a document, but only if you have the right guardrails in place. In one recent case, an attorney was reprimanded for submitting a ChatGPT-generated brief that cited cases that were not real. ChatGPT invented them, complete with plaintiffs’ names. So no, you should not use results without further investigation. That’s where a ChatGPT content policy comes in; yours might flat-out prohibit using ChatGPT for any legal documents.

That said, AI can be a great tool for: