By Brian Clifford
ChatGPT. GitHub Copilot. Bard. Bing. And dozens of other systems. The rapid evolution of generative artificial intelligence has generated intense interest and no shortage of headlines. And these systems certainly can be fun tools to explore. (Who isn’t at least a little interested in reading a short fan fiction story about their favorite musician, sports star, or celebrity chef generated entirely by an online chatbot?)
But are these types of systems appropriate for use in your control systems integration business?
To answer that question, there are several concepts you need to address — including:
- Confidentiality: When using prompt engineering, you must ensure that you have permission to input the data you are using with the AI system. Does your client agreement prohibit disclosures of project-related information to third parties? Even with respect to your own data, do you understand how it may be used by the AI host — including for purposes unrelated to your request (and perhaps even to help one of your competitors)?
- Intellectual property: In March 2023, the U.S. Copyright Office issued a policy statement confirming that copyright protection is limited to works of human authorship. Have you thought about the value of the AI output if you cannot obtain protection against copying and third-party use? Would such (lack of) intellectual property interests allow you to comply with your client agreement? And could using AI-generated code in your deliverables “infect” your other intellectual property rights, causing your entire code catalog to become “open source”? On the other side of the equation, are you sure that the AI system is properly authorized to use the data set it relies upon?
- Output quality: It is well known that the output from artificial intelligence systems can include “hallucinations” – information that sounds correct on its face but is, in fact, not supported by the underlying data. In addition, the data used to train the AI system may contain errors – and, by definition, will “lag” behind current best practices developed in the non-AI world. ChatGPT was described by one academic commentator as an “eager-to-please intern who sometimes lies to you.” Do you want to risk having your client deliverables described in such a way?
The automation industry has always been on the cutting-edge of technological developments, and its members often are early adopters of new applications for advanced systems.
But being aware of the limitations – and potential pitfalls – in using generative artificial intelligence in connection with your projects can help you manage your risks in this exciting and growing area.
The Faegre Drinker industrial automation, system integration and robotics team (which provides all CSIA members the benefits of the CSIA Legal Plan) is available to help you evaluate the use of generative AI tools in your projects and negotiate contract terms to manage the associated risks.
Brian Clifford is a partner in the industrial automation, system integration and robotics practice at Faegre Drinker. He can be reached at CSIALegal@faegredrinker.com.