
If you search for papers about 'AI readiness', you'll find dozens of articles and papers that invariably say much the same thing, but rarely is there any mention of training people in the ethical implications of AI or of setting up mechanisms for compliance.
For example, in AI Readiness, Alex Castrounis explains readiness in terms of technological, financial and cultural issues. Summarizing some of his points:
In an extensive benchmark study by Capgemini Consulting, AI Readiness Benchmark POV (PDF), the term "ethics" appears only once in 30+ pages.
In an article titled Ready, Set, Go! Data Readiness for Artificial Intelligence (AI)!, Kirk Borne suggests that:
Hopefully, I have demonstrated that data readiness is less about the data and more about readiness. Readiness spans multiple layers of operational maturity, including:
- Standardized methods for labeling, validating, cleaning, and organizing (indexing) data across an enterprise;
- A data strategy that establishes guidance for effective data management and data usage;
- Data governance that spans compliance, risk, and regulation related to data (including privacy, security, and access controls);
- Data democratization policies that specify access rights, 'right to see' authorizations, ethical principles, and acceptable applications for data usage across the organization;
- An open data platform that aggregates data and enables automated data ingest, processing, storage, and workflow orchestration;
- An organizational assessment of technological infrastructure needs; and
- Investment in the infrastructure (e.g., cloud, GPUs) to support AI solutions.
All of these articles, and many others like them, make good points. I wouldn't suggest that getting ready for AI should not involve these issues, but there is something glaringly missing. As Paul Virilio famously quipped, "The invention of the ship was also the invention of the shipwreck." Ethics, or trustworthy AI if you like, requires more than a position paper. It's a complicated subject that requires training and monitoring to keep you from foundering on the rocks.
To the extent that AI risks involve people — the social context — we can say that they are ethical risks. In general, ethics refers to standards of morally good or bad, right or wrong conduct. Defining and detecting unethical behavior or processes with AI requires attention to numerous social contexts, including:
If your preparation for AI does not consider these risks, your program will have serious problems. An excellent first step is learning the risks of using prohibited and high-risk data. To understand this, it is helpful to study some well-known failures and what steps could have avoided them. Part and parcel of this is learning to ask the questions that prevent similar failures.
We've talked a lot about 'AI Ethics', but a necessary sub-topic is data ethics. It is not a simple issue, as the proper use of data is subject to current and historical ethical frameworks, regulators, cultural mores and professional organizations. It goes beyond a document you assemble at the beginning: ethics evolve and can be addressed differently across jurisdictions.
Ethical issues with personal information and ownership of data are the most important to understand. The simplest way to remember this is to think about the social context. Social context refers to the immediate physical and social setting in which people live or in which something happens or develops. It includes the culture that the individual was educated or lived in and the people and institutions they interact with.
Ethics in AI becomes an issue when the social context is involved. Companies can incorporate guidelines into their AI development and implementation processes to limit unintended consequences arising from AI implementation. Formulating and practicing AI's ethical application requires consideration of some simple guidelines that can be useful for formulating a more detailed policy, such as the following:
Modelers have to learn how to deal with various types of bias:
AI modeling requires a great deal of skill because there is no procedural code to examine. You have to understand the structure of your data and how the algorithms perform.
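One common type of bias, disparity in model outcomes across demographic groups, can be checked with a few lines of code. The sketch below computes a simple "demographic parity" gap, the difference in positive-prediction rates between groups; the group names and predictions are hypothetical stand-ins, and real audits would use richer fairness metrics:

```python
# A minimal bias check: compare positive-prediction rates across groups
# (the "demographic parity" criterion). Group names and predictions
# below are hypothetical illustrations.
def positive_rate(predictions):
    """Fraction of records receiving a positive (1) prediction."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Toy example with two hypothetical groups:
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% positive
    "group_b": [0, 1, 0, 0, 0, 1, 0, 0],  # 25.0% positive
}
gap = demographic_parity_gap(preds)
print(f"demographic parity gap: {gap:.3f}")  # 0.375
```

A large gap does not prove the model is unfair, but it is the kind of signal a modeler should know how to compute and investigate.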
A further consideration is the need for mechanisms that explain how the model has arrived at its results, i.e., model explainability:
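One widely used model-agnostic explainability technique is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below is a minimal illustration; the "model" and data are hypothetical stand-ins, not a production implementation:

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    """For each feature, the accuracy drop after shuffling that column."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Toy "model": predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.4], [0.1, 0.6], [0.8, 0.9], [0.3, 0.2]]
y = [1, 0, 1, 0, 1, 0]
print(permutation_importance(model, X, y, n_features=2))
# Feature 1's importance is exactly 0 because the model never uses it.
```

Even this crude measure makes a point that matters for readiness: without some explainability mechanism, you cannot tell which inputs are actually driving a model's decisions.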
Process is essential:
Of course, to be ready for AI you need infrastructure, skills, data (and lots of it), data management skills, management backing… all of these things. But if you think about it, that is much like getting ready for any data technology. However, ethics, and a program to ensure them, is strangely missing from the available advice.
There are so many ways to make a mess of things with AI, and there is no procedural code to inspect to see what went wrong; you have to rely on an understanding of the process. Current DIY tools for machine learning pose the ultimate risk: users who lack knowledge of what the model does, and tools with no ethical guidelines built in.