AI in the USA

By Karin McGinnis 

The following excerpt is part of a series of blog posts on topics that will be discussed at the NCBA Privacy and Data Security Section Annual CLE. If you are interested in learning more, please join us. Register for the program here.

There is a lot of talk, but not a lot of clear law, about artificial intelligence (AI) in the United States. Most resources reflect a common understanding of AI: it is a machine-based system; it addresses human objectives; it uses algorithms designed by humans; it makes predictions, recommendations, and/or decisions; it is designed to evolve; and while it can do much good, it poses great risks and something should be done to regulate it.

Stakeholders also generally seem to agree on the risks posed by AI. First, the underlying data – both training data and data processed by the AI – may not be accurate. Second, the AI model has to learn to perform its function by processing large volumes of data. Collecting that data can implicate privacy laws (e.g., disclosure and consent requirements), and there are risks for the model if the data set lacks “integrity” (i.e., the data is not sound – garbage in/garbage out). Third, the algorithm could be biased. It is, after all, developed by humans, and humans bring their own presumptions and biases to their work. Fourth, unreliable or biased AI can have serious consequences for individuals, including denial of employment, credit, housing, due process, and other rights such as privacy. Consider the now-infamous example of Target using AI to determine that a teenage girl was pregnant and mailing her coupons for diapers and other baby items, which were discovered by the teen’s dad.

Where AI has been addressed by courts, legislation, or federal agencies, the focus has been on balancing these risks against the benefits of AI. Transparency (notice), data integrity, nondiscrimination, validation, impact assessments, and continuous monitoring are common themes. The following summarizes some materials reflecting the trajectory of AI regulation in the USA.

Would you like to learn more about issue spotting for privacy considerations when leveraging artificial intelligence? Join us on October 28 for the Annual Privacy and Data Security Section CLE.