AI training for state and local government agencies

This post was originally written in my role as a contributor to the Thomson Reuters Institute. You can read the original piece here.

The fast-paced development of generative artificial intelligence (A.I.) and its increasing presence in everyday work routines pose a challenge for government agencies. Without a proactive approach to embracing this technology, employees may adopt and use it without guidelines.

A report from Harvard University’s Law Center points out that generative A.I. has the capacity to democratize expertise, moving us away from a world where law and government services are understood only by subject matter experts. The key question is how government agencies can effectively use A.I. to benefit the public while protecting the public’s best interests.

Gain an understanding of which tools are in use

At the state, county, and municipal levels, an essential first step is identifying the generative A.I. tools and systems currently in use. Connecticut state legislation enacted in the summer of 2023 mandates that all generative A.I. and automated decision-making tools be inventoried by the end of the year, with state agency A.I. policies and an A.I. Bill of Rights to follow in 2024.

Municipalities including San Jose, California, and Seattle, Washington, have implemented regulations to ensure the responsible use of algorithmic tools. In San Jose, an algorithmic tool must receive approval from the digital privacy office; in Seattle, it must be approved by the purchasing division. San Jose goes a step further by maintaining a public-facing algorithm register, which describes approved A.I. tools and their use cases in plain language.

Establish shared values around A.I. use

Government agencies face a delicate balance between regulating innovation and fostering progress in government service delivery. The State of Maine drew a hard line earlier this year with its decision to pause all generative A.I. use by state agencies for six months. A less restrictive approach to generative A.I. adoption involves establishing a set of common core values to guide its use.

Pennsylvania Governor Josh Shapiro issued an executive order this fall outlining ten core values that should govern the application of this evolving technology in state operations. Broadly, these values seek to ensure that A.I. use empowers employees and furthers agency missions and equity while protecting privacy and security.

Stress accountability and responsibility for employees using the technology

Municipalities and counties that are implementing A.I. use policies and guidelines place a strong emphasis on holding employees accountable for the accuracy of the content they produce, whether or not it was created with the assistance of generative A.I. As Chief Information Officer Santiago Garces put it in reference to the City of Boston’s interim guidelines for the use of generative A.I., “technology enables our work, it does not excuse our judgment nor our accountability.” Employees using generative A.I. in cities like Boston, Seattle, Tempe (AZ), and San Jose (CA) are required by their respective policies to fact-check generated content and disclose the use of A.I. in its creation.

Santa Cruz County (CA) has a policy reminding staff to treat A.I. prompts as though they were publicly visible online and subject to public records requests. The City of Boston’s policy stresses the importance of protecting resident and customer privacy by never including sensitive or personally identifiable information in prompts.

Expand access to justice through A.I.

A topic that has sparked debate in recent years is the use of generative A.I. tools for legal interpretation. A 2022 publication in the Yale Journal of Law and Technology outlines the potential benefits and risks of these tools, highlighting the shortage of civil legal aid attorneys and the limitations of pro bono work. It also envisions how non-lawyers could greatly reduce the cost of legal services and increase their accessibility.

The risks of moving too quickly in this direction include the inherent bias in the existing digital records that A.I. tools rely on, as well as the fact that generative A.I.’s ability to recognize patterns does not necessarily extend to the nuanced judgment needed to inform legal advice. Another publication, issued earlier this year in the Vanderbilt Journal of Entertainment and Technology Law, suggests that more stable fields of law, such as trust law, may be better candidates for early A.I. application.

One major finding in this report is that generative A.I. can shift the legal industry away from hourly billing and toward flat-fee service provision as document automation is streamlined. This aligns with the findings of the Thomson Reuters Future of Professionals Report, which indicate that, with A.I. assistance, less credentialed employees can now complete work that previously required credentialed employees billing at higher hourly rates.

Provide mechanisms for safe innovation and experimentation

The State of Utah made history this year by becoming the first state to launch a legal services innovation sandbox. The Office of Legal Services Innovation oversees non-traditional legal businesses and legal service providers with the aim of ensuring that consumers have access to innovative, affordable, and competitive legal services. The agency actively supports emerging tools and platforms that offer creative approaches to providing legal services, particularly to historically underserved communities. To guard against harm, entities within the sandbox are audited monthly to measure utilization and detect potential consumer harm.

As highlighted in the Yale Journal of Law and Technology publication, a major barrier to collaboration between legal service experts and technologists in advancing A.I. for legal services is the American Bar Association’s restriction on non-attorneys owning or investing in law firms. While this restriction is intended to safeguard the independent judgment of attorneys, it also hampers innovation and collaboration between the legal and technology sectors.

Innovation and experimentation should, of course, be deployed first in areas where the risk of harm is lower. Santa Cruz County’s A.I. usage policy explicitly advises employees against using A.I. tools in critical decisions related to hiring or other sensitive matters where bias could cause harm.
