As WA government officials embrace AI, policies are still catching up

Local governments are still figuring out best practices for using AI in their daily work. A public records request led to the uncovering of dozens of ChatGPT logs from city government employees in Everett and Bellingham.
Illustration by Genna Martin / Cascade PBS

This is the second in a two-part series about how local governments in Washington state are using artificial intelligence. Read part one here.

In February, heavy snowfall swept through Bellingham. Bre Garcia, a recent college graduate, began her commute to her job at a printing company, but quickly turned around. Her street was covered in snow and she didn’t feel safe.

Garcia had assumed the city would prioritize major arterials like hers for plowing. She saw private plows clearing commercial parking lots, but no city plows on the streets. Concerned about the dangerous conditions, she reached out to the city.

“It did not look nor feel good as I tested my car and potentially my life against unprepared drivers on unprepared roads,” Garcia wrote in an email to Bellingham’s Public Works department. “Hope this complaint reaches the right place, and that plowing becomes more of a priority for the actual streets that need it.”

A city communications official replied that evening, telling Garcia that crews had made multiple trips to her street: “We recognize that conditions may not have been ideal when you needed to travel, and we regret any difficulties this caused.”

Garcia found the response a little dismissive. It was like “they didn’t read my email at all,” she said.

If the response seemed impersonal, that might be because almost none of it was written by a human. Records show that it came from ChatGPT: A city staffer copied Garcia’s email into the chatbot and asked it to generate a response “acknowledging the concerns but letting them know we did have crews out plowing.” Only four words were added to the chatbot’s output.

It wasn’t just Garcia’s email that was treated this way. Records show a Bellingham staffer asking ChatGPT to write responses to emails about parking, traffic, a homeless camp and more. The earliest example is from more than a year ago.

Responding to constituent emails appears to be a “high-use case for how city employees are using AI tools,” said Bellingham Mayor Kim Lund.

“We consider that a permissive use of AI for efficiency reasons, but it’s a springboard to finalize the communication with the constituent,” she said. “There’s still that discernment that happens and that critical thinking.”

Cascade PBS and KNKX sent Garcia a screenshot of her email appearing in the city official’s ChatGPT history, which was obtained through a public records request. Garcia wasn’t happy to see it.

“I was totally dismissed,” she said.

The snowplow complaint was fairly minor, but it still had safety implications, Garcia said. If the government can’t be trusted to respond directly to something like that, she added, how can it be trusted to deal with bigger issues?

“It makes me wonder if anybody actually sat down to seriously think about what I was saying,” she said.

Rampant AI use, few guardrails

Bellingham is still developing a formal AI policy. Asked whether the public should have the right to know a message is AI-generated, Lund wasn’t sure; it’s a question the city is still grappling with.

That lack of clarity isn’t unusual: As local governments increasingly embrace generative AI tools like ChatGPT, adoption is often outpacing safeguards and ethical guardrails.

Development of government AI policies at the state and local level is largely led by IT departments. In a recent nationwide survey of 300 state and local government IT directors, nearly 80% said they were concerned about a lack of clear regulations on AI use.

Lund said she sees AI as a way to make government work more efficiently, but added, “There’s an abundant need for caution and understanding the implications of these tools.”

“I think that’s above any one city’s IT department or policies to really grapple with,” Lund said. “Because there are weighty, weighty factors to consider.”

City officials in Washington have used generative AI for a variety of policy and communications tasks, according to thousands of pages of ChatGPT histories obtained by Cascade PBS and KNKX via records requests.

As noted in part one of this series, Bellingham and Everett are the main focus of this story not because they’re outliers, but because they were the fastest and most comprehensive in their responses. Other cities, like Seattle, are slowly responding to the request in installments.

While the Washington state government has issued guidance on AI use for its public employees, policy adoption by local governments “is a bit more varied,” said Jai Jaisimha, a Seattle tech entrepreneur and co-founder of the Transparency Coalition, a national organization that advocates for regulating AI.

“There’s quite a bit of work to do,” Jaisimha said.

Washington’s IT department released interim guidelines for the “purposeful and responsible” use of generative AI by state employees in 2023.

The guidelines say humans need to review all AI content for bias and accuracy, and that people should avoid putting confidential data into AI chatbots, or using AI to draft communication materials on “sensitive topics that require a human touch.”

The guidelines also say all AI-generated content used in government business should be labeled to allow for “transparent authorship and responsible content evaluation.” AI-generated content should include a disclosure about the model of chatbot used, the exact language of the prompt, and the name of the human who reviewed the AI output.

The state’s guidance on labeling AI-generated documents released to the public does not appear to be widely followed.

Chatbots are notorious for making things up, so accuracy is a major concern when they’re used by governments. The records reviewed for this story contained numerous examples of ChatGPT introducing errors into drafts of official documents, but most were corrected, suggesting that human review is happening.

Compliance with security guidelines is less standardized: Some of the ChatGPT histories obtained by Cascade PBS and KNKX were redacted by records officers because the user had introduced confidential information into the chatbot. Many records didn’t meet the legal threshold for redaction, but still contained sensitive personal information not intended for public consumption.

A focus on security, accuracy and transparency is common in government AI policies. Cities that have already adopted AI policies, including Spokane and Seattle, share similar core principles.

“There’s some core things that, regardless of the use case, you want to look at when you’re deploying AI at scale in a manner that affects people’s lives,” said Yuki Ishizuka, a policy analyst with the state attorney general’s office who sits on Washington’s AI Task Force.

The task force, created by former Washington Gov. Jay Inslee in 2024, comprises elected officials, business stakeholders, tribal leaders and representatives of advocacy organizations, and is tasked with delivering recommendations on the use of AI in the private and public sectors.

The task force has made one official recommendation so far: that lawmakers strengthen the language in Washington law against AI-generated child sexual abuse material. The next preliminary report is due in September, and a final report is due in July 2026. Both will have recommendations on local government and AI, Ishizuka said.

“Technology moves very fast, law and regulation tends to move slowly,” Ishizuka said. “And it’s often not until after a lot of technology and the impacts are felt that often regulators respond.”

A ‘permissive approach’

Everett’s IT department sent employees a set of provisional guidelines on AI usage in June 2024, and is now in the process of developing a more robust AI policy. The city is taking a “very cautious approach,” said IT director Chris Fadden. Everett is modeling its policy on a template created by the GovAI Coalition, a group of cities around the United States collaborating on issues related to AI in government.

City employee unions offered feedback on Everett’s draft AI policy in July. It’s now in the final review stage, and the mayor’s approval is expected in the next couple of weeks, said Simone Tarver, Everett’s communications manager.

Going forward, staff in Everett will be instructed to use only Microsoft Copilot, which comes in a version designed for government entities. Use of ChatGPT and other chatbots will be allowed only with a special exemption, Fadden said.

“There’s a lot of safeguards in the Microsoft product versus ChatGPT,” he said.

Copilot is more secure, Fadden said, and will integrate better with existing city datasets and the Microsoft Office products the city uses. The government version is hosted on servers in the United States and has other safeguards to meet public agency security requirements. Copilot’s base chatbot is built on a large language model developed by OpenAI, the company that created ChatGPT. Microsoft warns users that Copilot can make mistakes, and recommends avoiding its use in “high stakes” scenarios with legal, regulatory or compliance implications.

Don Burdick, Bellingham’s IT director, said the city has taken a “permissive approach” to letting staff use AI, giving them broad flexibility in how they use it and with which tools. The city is also a Microsoft client. Staff are now being encouraged to use Copilot, but they can still use ChatGPT or other chatbots if they want.

“The industry is evolving way too fast,” Burdick said. “Keeping that sort of grip on things is not productive.”

Burdick said staff have been encouraged to be curious and explore generative AI, while remaining “skeptical and humble” and “not completely relying on it.”

The two cities differ on whether disclosure should be required.

If generative AI is being used for “more than mere language refinement,” Everett’s new policy will recommend citing its use.

But it’s unclear if Bellingham’s new policy will require such citations. If employees are reviewing and evaluating a chatbot’s output before sending it, they’re assuming responsibility for the content, and there’s no need to disclose that it was generated with AI, Burdick said.

Melissa Morin, Bellingham’s communications director, said department heads are continuing to discuss edits to the draft AI policy, and that the city expects to adopt it before the end of the year.

(Cascade PBS adopted an AI policy in July. It requires human review of AI output, and disclosure if AI tools are used in content creation or decision-making. KNKX is currently working on an AI policy. No parts of this story were written using generative AI.)

‘Generate a picture of me looking awesome’

While city officials are still grappling with whether to allow some types of AI usage, they agree that a few specific uses should be fully off-limits: no using AI for screening job applicants, social scoring or fully autonomous decision-making. Fadden said Everett’s new AI policy will also include a provision that AI not be used for autonomous weapons systems: no killer drones.

Morin said Bellingham is skeptical of using AI for image generation. When staffers recently suggested using graphic design tools with built-in image generation, she recalled saying: “Don’t even go there, it’s just too problematic.”

“That’s one where I feel as a communications director way less comfortable putting something out to the public without disclosing that it was generated by AI,” Morin said. “Because it’s not something we check or manipulate in the same way we do when we’re using it for written text or website copy.”

Records show city staff frequently asking ChatGPT to generate images, including city logos, promotional photos of police dogs and graphics advertising special events. But most staffers seemed simply to be experimenting; no instances of AI-generated images published on city websites or other communications could be confirmed.

Many were just having fun.

“Generate a picture of me looking awesome,” one Everett staffer told ChatGPT.

Over several months, one Bellingham official used ChatGPT to generate dozens of minion memes — until the chatbot began refusing the requests, apparently because of an update to its content policies. The staffer tried getting around the restriction by asking for humanoid banana memes; the chatbot refused.

Many records show staff asking ChatGPT policy questions, though it isn’t always clear if they’re relying on the chatbot’s advice or simply testing its capabilities.

In Everett, a city staffer asked ChatGPT what the best location would be for Everett’s light-rail station. (The chatbot suggested three options: somewhere along Colby Avenue, in the vicinity of the Everett Mall or near Everett Station.)

Everett Mayor Cassie Franklin doesn’t think AI should be used to make actual policy decisions like that. “I don’t trust a computer to develop that information for us,” she said.

Sensitive data 

Some elected officials remain wary of generative AI.

Bellingham City Councilmember Jace Cotton said he’s “not a big ChatGPT user.” Before being elected, he briefly experimented with it for data analysis as a campaign worker, but he said he hasn’t used it in his current job.

“I worry sometimes about the privacy implications of feeding city information into algorithms that are black boxes,” Cotton said.

Cotton’s concern is a major one for governments using AI: ChatGPT is not a secure platform, which could put cybersecurity and constituents’ privacy at risk.

After it’s introduced into a chatbot, data often “gets ingested and consumed and resides in the AI,” said Ishizuka of the state’s AI task force. Washington’s AI guidance for state agencies warns that entering non-public data into systems like ChatGPT could lead to “unauthorized disclosures, legal liabilities, and other consequences.”

In Bellingham, records show that one IT staffer gave ChatGPT a copy of the city’s GIS computer code for tracking homeless encampments, and asked for help debugging it. The record of this request had to be heavily redacted, because it contained information that could create cybersecurity vulnerabilities for the city.

Several other chat logs were redacted because they contained personal information “in files maintained for employees, appointees, or elected officials, to the extent that disclosure would violate their right to privacy.” Some Everett police records were redacted because they contained performance review information. One Bellingham police detective’s chat history was redacted because it contained information about an open case.

In an emailed follow-up statement, Burdick, Bellingham’s IT director, said the city’s training makes it clear that staff should not input confidential information into generative AI tools.

“Training staff to use generative AI tools well will take time, and we are already learning about how to better clarify our expectations about its use to staff,” Burdick said.

This spring, Bellingham officials emailed a copy of the city’s draft AI policy to city employee union leaders. Records show that the policy was drafted with assistance from ChatGPT. Over the course of more than a year, an IT staffer in Burdick’s department asked the chatbot numerous AI policy questions.

“[C]reate an example ai policy for a city government,” the staffer wrote in one prompt.

Seven of the 10 guiding principles in the policy procedure document appear to be partially copied from ChatGPT’s output. The main difference is that the city’s version replaces the phrase “AI system” with “solution.”

Burdick said staff vetted ChatGPT’s outputs while putting together the AI policy.

“There was a lot of back-and-forth in that process, and that’s totally what we’re encouraging staff to do with the tools,” Burdick said. “There’s nothing wrong with asking AI: ‘Build that process for me, talk me through that process’ ... It doesn’t mean we’re going to take it carte blanche for what it is, but we worked with it to build the vision that we saw.”

‘Consequential decisions’

One common use of AI appears to be generating application materials for state and federal funding opportunities — a practice that could have real-life ramifications.

Cities spend a lot of time applying for grants. The applications are often similar, Everett Mayor Franklin said, and AI can help cut staff work time, freeing employees to prioritize “other important work, engaging with the community.”

But Jaisimha of the Transparency Coalition said caution is required when using AI for grant applications. It’s an example of a “consequential decision,” he said, with large amounts of public money on the line.

“You hope that the particular organizations that are requesting grant applications have requirements that people at least disclose the use of AI,” Jaisimha said. “That is another really key form of transparency.”

In several instances, a planner in Bellingham used ChatGPT for help requesting state funding for cyclist and pedestrian safety projects.

“[I] need help answering the following question about the project — Does this project benefit underrepresented communities? If so, please describe,” the user wrote. “Look on the City of Bellingham website for project information and give me a response.”

Emily Bender, a linguist at the University of Washington and prominent AI critic, was concerned to hear that. Racial equity narratives are important in grant applications “because we want everybody thinking about these issues,” she said, and using a chatbot defeats that purpose.

Similarly, a staffer in Everett used ChatGPT to fill out an application for $7 million in affordable-housing funding from the U.S. Department of Housing and Urban Development. The employee uploaded several city documents about the project, and told ChatGPT to use the information to generate answers to some of the application questions, a racial equity narrative and other large sections of the document.

The staffer also used ChatGPT to generate more than 20 individualized letters of support from local elected leaders and organizations, including the Everett Housing Authority, the Snohomish County Treasurer’s Office, U.S. Senators Patty Murray and Maria Cantwell and several state representatives. The staffer also asked ChatGPT to generate an email template to send to those entities, asking if they’d be comfortable signing the support letter.

The city’s final grant application included 23 distinct letters of support. Almost all were identical to the ones generated by ChatGPT.

Franklin said she wasn’t aware of any HUD policies related to the use of AI-generated grant application materials. HUD didn’t respond to a request for comment.

A few federal agencies are cracking down on the use of generative AI in grant applications. In July, the National Institutes of Health said it had seen an uptick in AI-generated grant applications, which had the potential to overwhelm the grant-review system and harm the “fairness and originality” of the process.

Going forward, the agency said it won’t consider applications that are “either substantially developed by AI, or contain sections substantially developed by AI.” If AI is detected after a grant is awarded, the NIH could suspend funding.

Text without accountability

Can generative AI tools be used ethically in government? People tracking development of the technology disagree. Jaisimha of the Transparency Coalition thinks it’s possible, but only with significant guardrails for security and transparency, and models that are “sandboxed appropriately” to address specific tasks.

Bender at the University of Washington disagrees. She disapproves of the term “AI” because, she said, it implies an intelligence that isn’t actually there. Large language models like ChatGPT — she calls them “synthetic text-extruding machines” — are trained on massive volumes of writing by humans, much of it copyrighted. They generate text by predicting which word is most likely to come next in a sentence.

“That is not text that anybody has accountability for, it’s not text with any communicative intent,” Bender said.

Bender said she can empathize with civil servants feeling overworked, but she thinks the solution is for governments to invest in more staff and resources, not unaccountable text machines that frequently make up “facts.”

“What happens over time as people lose confidence in their own ability to write these kind of letters without this assistance?” Bender asked. “What does that do to their expertise?”

What’s next?

The mayors of Bellingham and Everett say AI is here to stay.

In Everett, a “select group of staff members” from various departments are now meeting weekly for in-person AI training and discussion, said Tarver, the city’s communications manager. The plan is for those “AI champions” to act as the “first line of support for AI-related topics” and share what they’ve learned in their respective departments.

This peer-learning approach “creates a safe environment for discussing cultural shifts and addressing any apprehension, ensuring a smoother transition to AI integration,” Tarver said.

A draft of Bellingham’s AI policy said the city “encourages the use of AI to enhance public services, improve operational efficiency, and foster innovation while maintaining transparency, accountability, and respect for citizens’ privacy and rights.” The line was copied almost verbatim from ChatGPT.

Burdick, Bellingham’s IT director, said citizens can expect to see more AI use in city services.

Citizens email the city with a lot of questions that could have easily been Googled, he said, and using AI to answer those queries can free staff time to work on more important projects. At some point in the future, he thinks the city will have a chatbot on its website for that purpose.

Bellingham Mayor Lund still has concerns about AI — namely the climate impact of large language models, which rely on massive data centers that consume vast quantities of water and energy. Still, she thinks the technology will become increasingly necessary.

Franklin agreed. “Absolutely, this is going to be part of our future,” she said.

Some researchers pushed back on the idea that widespread AI usage is inevitable.

“I see that narrative as basically a bid on the part of big tech to steal the agency of everybody else, because the future is not written, and we can choose not to do these things,” said Bender, the University of Washington professor.

Garcia, the Bellingham resident who emailed the city about snowplows, said she’s worried about the impact of AI on the environment and, more broadly, on human connection. She graduated from college in 2023, just as the technology was starting to cast its shadow over higher education and the entry-level job market. She noted that if a student had used AI the way the city did, they’d probably be in serious trouble for plagiarism.

Garcia works in customer service, and said she takes pride in being able to connect with a customer — person to person — to help them solve a problem.

“I don’t know why I would replace that,” she said.

All stories produced by Murrow Local News fellows can be republished by other organizations for free under a Creative Commons license. Image rights may vary. Contact editor@knkx.org for image use requests.

Nate Sanford is a reporter for KNKX and Cascade PBS. A Murrow News fellow, he covers policy and political power dynamics with an emphasis on the issues facing young adults in Washington. Get in touch at nsanford@knkx.org.