Using Artificial Intelligence: The Good, The Challenges, And The Right Way
Debbie Evans, Group Data Protection Officer, Rentokil Initial
The ability of machines to learn and think for themselves is science fact, not science fiction. Whilst for technologists this is nothing new and offers huge opportunities for business development, for the public it can seem a scary prospect. Science fiction movies, with their doom and gloom of computers taking over the world, haven’t helped. CIOs need to be mindful of the concerns people have about technology use and demonstrate that technology is not something to be feared, but something that can be beneficial and trusted if used responsibly.
Balancing business benefits with responsible data use can be a challenge, but implementing appropriate controls when using Artificial Intelligence is the right thing to do. There’s no doubt that accessing huge volumes of data and using machines to predict behaviour and outcomes can have a profoundly positive effect, from helping to manage the COVID-19 response to diagnosing illnesses and improving innovation and productivity. However, the trustworthiness of AI systems is a key factor in adoption, and consideration of the human impact is essential. Globally, laws and guidelines may apply to businesses where Artificial Intelligence tools are used.
The General Data Protection Regulation and equivalent data protection laws include controls and safeguards that can be applied within AI systems where personal data is used. The concept of privacy by design and default is a requirement under many existing and evolving data protection laws globally. This means that data protection issues should be incorporated into the design and proposed use of any system using personal data, even the most complex AI system. Businesses need to ensure that any personal data is processed fairly, lawfully, and transparently; and individuals whose personal data is being processed should be able to easily understand how their data is being used and know that their data is secure.
The challenge with AI systems is that it can be difficult to explain how they work in a way that is clear to individuals, given the use of algorithms and the way patterns are derived from analysis of large volumes of data. It is equally difficult to be confident that the outputs of AI systems are unbiased, ethically sound, and error free. Individuals may not always be aware of how their personal data is sourced when handled by AI systems: whilst some data may be consciously provided, other data may be inferred based on probabilities.
Establishing business controls and processes that align with data protection requirements safeguards businesses and ensures a human-centric approach to technology use. The use of technology should always take into account the human element; without the ability to consider the impact on the individual, the benefit of technology is at risk. Whilst machines can outperform us in calculation capacity, they don’t necessarily have the ability to understand the impact on an individual’s thoughts or feelings if data is used unethically. Human intervention at various stages of the process is required to ensure that data is used fairly.
The following is a checklist of controls that businesses should consider when looking to procure any AI system:
• Complete a Supplier Assessment - Review the AI system supplier’s credentials: do they operate in a transparent manner? Are their security and data protection practices aligned with best practice? Complete a comprehensive assessment of their technical and organisational security and data protection controls.
• Project Governance - Ensure privacy by design and default principles are incorporated into the AI system’s development and deployment. Are the privacy by design principles clearly defined and reviewed at the various stages of the system’s use? For example, are there clear accountability owners for the data? Is the data processed kept to the minimum necessary?
• Data Protection Impact Assessments - Is there a record that demonstrates that the data processing risks have been assessed and balanced against the necessity and proportionality of the activity?
• Creation of Records of Processing - Data protection laws such as GDPR require that businesses maintain records of personal data processing activities, that is, a data inventory or record of processing. Prior to processing any personal data, whether in a trial or live environment, the business should ensure that a record of processing is created in the prescribed legal format, which requires, amongst other things, a description of the purposes of the processing, the types of data, and the lawful basis. Care needs to be taken to ensure the appropriate lawful basis is applied, and if consent is required, can the business evidence that it has been obtained?
• Privacy Notices - These may need to be created or updated to ensure that people are properly informed about how their data will be processed, how the technology works, and how it will be used.
• Ethical Principles - Businesses should ensure that the use of AI systems is aligned with achievable business ethical principles, for example transparency (explainability), accountability, robustness (security and safety), and human-centred values and fairness.
• Auditable Machine Learning Algorithms - Is the business able to explain the rationale behind algorithmic decisions and check for bias, discrimination, and errors? What assurances can be provided to individuals?
• Appropriate Contract Terms - Whether for system supplier or customer contracts, there needs to be clarity about data processing responsibilities and risks.

Reviewing your business’s adoption of AI systems against the above checklist will provide demonstrable evidence that your business and its use of AI systems can be trusted. A customer- and human-centred approach is the responsible and right thing to do.
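To make the records-of-processing item in the checklist concrete, the sketch below shows how such a record might be captured as structured data. The class and field names are hypothetical and purely illustrative; GDPR prescribes the required content of a record of processing (purposes, data categories, lawful basis, and so on), not any technical schema.

```python
# Hypothetical sketch: a record-of-processing entry as structured data.
# Field names are illustrative; GDPR defines what a record must contain,
# not this schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProcessingRecord:
    purpose: str                    # why the personal data is processed
    data_categories: List[str]      # types of personal data involved
    lawful_basis: str               # e.g. "consent", "legitimate interests"
    consent_evidence: Optional[str] = None  # needed when basis is consent

    def validate(self) -> bool:
        # If consent is the lawful basis, the business must be able to
        # evidence that consent was actually obtained.
        if self.lawful_basis == "consent" and not self.consent_evidence:
            raise ValueError("consent basis requires evidence of consent")
        return True

record = ProcessingRecord(
    purpose="Predictive scheduling of service visits",
    data_categories=["customer name", "site address"],
    lawful_basis="legitimate interests",
)
print(record.validate())  # True
```

A check like `validate()` can be run before any processing begins, so a missing piece of evidence is caught in the trial stage rather than in production.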
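The bias check mentioned under auditable machine learning algorithms can be approached in many ways; one simple and widely used heuristic is to compare selection rates across groups. The function names below and the use of a ratio of selection rates are illustrative assumptions, not a GDPR requirement.

```python
# Hypothetical sketch: a simple disparate-impact check on an AI system's
# decisions, grouped by a protected attribute. This is one common
# heuristic for flagging possible bias, not a legal test.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest; 1.0 means parity."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative data: group A approved 80% of the time, group B only 50%.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.80 -> 0.62
```

A low ratio does not prove discrimination, but it is exactly the kind of auditable signal that lets a business answer the question above: can we explain and evidence that outputs were checked for bias?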