Businesses are integrating artificial intelligence into their applications and operations more rapidly than any comparable technology before it. Examples include email, document creation, notetaking, data summarization and analysis, design and engineering, finance and accounting, human resources, marketing and communications, and IT. Businesses need to rapidly develop and execute plans for the structured implementation of AI. The following are 10 key features of such a plan.
- Governance. Form a cross-functional team to manage and make decisions about AI implementation. The team should include personnel with management, operations, finance, and IT responsibilities, as well as an attorney with cybersecurity, privacy, and AI expertise.
- Policy. Adopt an AI use policy to address existing and foreseeable business and legal issues. Amend that policy throughout the AI implementation process to reflect decisions made about those issues and the operational uses of AI.
- Existing and Potential Uses. Identify existing uses of AI and potential additional use cases. Examples include stand-alone generative AI (used to create content such as text, audio, video, and photos), as well as AI integrated into other applications (such as customer relationship management apps, HR platforms, and IT ticketing systems).
- Control and Ownership. License AI tools on terms that ensure the ownership and confidentiality of data inputs and AI outputs, and that control how the business's data is used to train AI. Enter into agreements with AI developers to secure those rights and to allocate obligations and liabilities.
- Training and User Groups. Select groups of users to test AI. Train them on the AI use policy, on how to use the AI, and on the business goals for testing, prototyping, and use.
- Testing and Prototyping. Select non-production, non-customer data and uses for AI testing. Ensure human oversight of data input integrity and of the legitimacy and reliability of AI outputs. Once testing yields legitimate and reliable outputs, identify limited production and customer use cases appropriate for prototyping, ensuring transparency with, and consent from, customers. Once prototyping yields legitimate and reliable results, deploy approved AI more broadly in production.
- Recordkeeping and Auditing. Audit AI use in production to verify continuing data input integrity and output legitimacy and reliability, and maintain records of those audits. Adjust the AI use policy and practices to address operational, legal, and other issues as they arise.
- Assessment of Restricted Uses. Conduct risk assessments for “restricted” uses of AI. These include using AI to process sensitive personal information (such as health and biometric information; data about children; race, religion, political affiliation, and other protected characteristics; and government identification and financial account numbers), and using AI in ways that pose a risk either to humans (such as HR functions, admissions decisions, and consumer profiling) or to systems and security (such as IT, infrastructure controls, and surveillance).
- Contracting and Transparency. Ensure that consumers and business customers are aware of the business's use of AI through its privacy policy, terms of use, and contracts. Contracts with consumers and customers should address consent, AI use standards, and the allocation of liability.
- Management. Authorize personnel to manage the ongoing and evolving uses of AI. Ensure that other AI risks are addressed as well, such as contracts with vendors that use AI on the business's behalf, and cyber, errors & omissions, and professional liability insurance coverage for AI use.
AI is a powerful technology with the capacity to create both tremendous opportunity and significant risk. Businesses should implement it rapidly to secure competitive advantages, but should do so under a structured plan that manages and mitigates technology, business, and legal risk.
Cam Shilling founded and chairs McLane Middleton’s Cybersecurity and Privacy Group. The group of six attorneys and one paralegal assists businesses and private clients in improving their security, privacy, and AI compliance, and in addressing any incidents or breaches that occur.