{"id":45048,"date":"2023-07-21T07:07:23","date_gmt":"2023-07-21T11:07:23","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=45048"},"modified":"2023-07-20T13:09:51","modified_gmt":"2023-07-20T17:09:51","slug":"ai-and-security-is-your-organization-ready","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/ai-and-security-is-your-organization-ready\/","title":{"rendered":"AI and Security: Is Your Organization Ready?"},"content":{"rendered":"
In late May, an image showing thick, black billows of smoke rising near the Pentagon, the headquarters of the U.S. armed forces, popped up on a prominent social media platform.<\/p>\n
The image was determined to be a false report of an explosion near the federal building. Local and national officials quickly refuted the claim, but the post still spread through national and international investment circles, causing the S&P 500 to drop, albeit briefly, before rebounding. The image, and other similar images accompanying claims of a White House explosion, were likely created using generative AI.<\/strong><\/p>\n Only days later, in an open letter<\/a> signed by more than 350 AI experts and public figures, industry leaders warned that \u201cmitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.\u201d<\/p>\n None of this is meant to scare business leaders, but to illustrate a few key points:<\/p>\n Business leaders should ask themselves, \u201cIs my organization ready, and if not, how can I prepare?\u201d<\/p>\n Writer, a generative AI platform, recently revealed<\/a> that nearly half of senior executives believe corporate data has been unintentionally shared with ChatGPT, the most widely used generative AI platform among enterprises. 
These concerns aren\u2019t baseless.<\/p>\n In fact, cybersecurity veteran David Lefever, founder, principal and CEO of The Mako Group and one of Centric Consulting\u2019s business partners, has found that many business leaders today are concerned about a growing number of threats.<\/p>\n Among them is \u201cleaky data,\u201d or the unintentional sharing of information with a third-party system without proper documentation and authorization.<\/strong> This can lead to privacy breaches, invalid and unreliable information, accidental security risks, and other threats.<\/p>\n At a minimum, all AI security plans should include:<\/p>\n The best ways to prepare for the security implications of artificial intelligence are to educate, create governance, remain vigilant and plan for recovery in case of a breach:<\/p>\n With any risk or vulnerability, there\u2019s a software component and a people component. To successfully leverage AI<\/a> in a secure manner, your organization will have to address both, starting with creating awareness and providing comprehensive, ongoing training for the workforce.<\/p>\n \u201cAI can create content so convincing to the average person that it\u2019s going to be difficult for them to discern what\u2019s real without intensive training,\u201d Lefever said. 
\u201cSocial engineering approaches will become much more sophisticated and convincing, and it will require teaching the workforce to be critical thinkers around security.\u201d<\/p>\n Communicating policies, guidelines, best practices and updates to these living documents will be critical to creating and maintaining a security mindset.<\/strong><\/p>\n A key part of creating a security mindset is establishing a governance plan that promotes the responsible and ethical use of AI tools while helping ensure compliance and manage risk in a continually evolving landscape.<\/p>\n Every company and employee takes on a certain level of responsibility when they begin interacting with AI tools. Organizations should promote innovation while ensuring secure collaboration.<\/strong><\/p>\n In cybersecurity, this is often known as a \u201csandbox,\u201d or a place to execute ideas separate from network resources, production systems and infrastructure that could otherwise be impacted. These testing environments can also be used to test solutions or custom code before deploying them to a broader audience.<\/p>\n Companies should not only provide a safe place to explore these tools and their capabilities, but they should also make sure employees know about the space and encourage them to use it.<\/p>\n As AI usage climbs, companies should conduct penetration testing frequently. Leaders must also give their teams tools that help distinguish human-generated from AI-generated content.<\/p>\n Keep close tabs on your technology investments and build in tools that screen and flag AI-generated content<\/a> in communications such as email, which attackers use in phishing and other scams. Other new and evolving technology can help teams better catch and guard against malware and bad code.<\/p>\n It\u2019s also important to reevaluate the technology you currently use to ensure it can support the sophistication AI brings.<\/p>\n\n
AI Security Considerations for Enterprises<\/h2>\n
\n
How to Prepare for AI Security Impacts Now<\/h2>\n
Create Security Awareness Across Your Organization<\/h3>\n
Set Up AI Guardrails and Governance<\/h3>\n
\n
Provide a Safe Environment for Your Team to Be Innovative<\/h3>\n
Continually Perform AI Risk Assessments<\/h3>\n
Create an AI Incident and Disaster Recovery Response Plan<\/h3>\n