The feds get 4,000 website complaints a day. Can a “responsible” AI chatbot untangle the mess?

At the Ottawa Responsible AI Summit, experts discussed security, fairness and who gets a seat at the table.

According to Michael Carlin, acting director of policy at Canadian Digital Service (CDS), the Canadian government receives up to 4,000 complaints about its website per day. Could an artificial intelligence (AI) chatbot make browsing 10 million government web pages less of a headache?

“The data set you collect now could become a weapon in the not-too-distant future.”

Michael Carlin, Canadian Digital Service

Carlin's team is working to find out while ensuring the tool is safe, reliable, and fair. He explained the development process for the tool, which is still in beta testing, at the inaugural Responsible AI Summit in Ottawa on Wednesday afternoon. The event brought scientists, entrepreneurs and government leaders to Bayview Yards to discuss the literal and figurative power of artificial intelligence, trust and who gets to define its limits.

Powered by OpenAI's GPT-4 model (Carlin said a Canadian-made model from Cohere didn't work), the Canadian government's AI chatbot prototype lets users ask questions in plain language, get relevant information from Canadian government websites, and be redirected to the appropriate web page, with the caveat that users must check the answer the AI generates. Currently, these kinds of requests put a strain on government call centers and personal accounts, which may also be less accessible to people with “complex needs,” Carlin said.
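The article doesn't detail how the prototype is wired together, but the behaviour described, answering a plain-language question from government web content and pointing the user to the source page, matches a standard retrieve-then-answer pattern. The sketch below is a hypothetical illustration of that pattern using the OpenAI Python SDK; the page search function, prompt wording, and example URL are assumptions for illustration, not the CDS implementation.

```python
# Hypothetical retrieve-then-answer sketch. The page "search" below is a
# hard-coded placeholder; a real system would query an index of government pages.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_gc_pages(question: str, k: int = 3) -> list[dict]:
    """Placeholder for a search over government web pages."""
    return [{
        "url": "https://www.canada.ca/en/services/immigration-citizenship.html",
        "excerpt": "Apply for citizenship, get a visa, or check application status.",
    }][:k]


def answer_question(question: str) -> str:
    pages = search_gc_pages(question)
    context = "\n\n".join(f"{p['url']}\n{p['excerpt']}" for p in pages)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "Answer only from the excerpts provided, cite the source URL, "
                "and remind the user to verify the answer on the linked page."
            )},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_question("How do I check the status of my immigration application?"))
```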


The security and fairness considerations driving the development of this chatbot reflected much of the wider discussion at the Ottawa Responsible AI Summit, which focused on how data privacy can be protected in the age of AI and how AI tools can be used fairly.

“Responsible AI is not just about managing risk, it is about ensuring that the benefits of AI reach everyone,” Kanata-Carleton MP Jenna Sudds said in her opening remarks. “And that our system reflects Canada's diversity, our values, our social strengths.”

RELATED: Feds Sign New Agreement with Cohere to Study AI Use in Government Services

An account will not be required to access the chatbot, and the tool will not accept any personal information included in a request, such as a social insurance number or phone number. Carlin said this was a “design choice” and that the tool is intended to let people make anonymous, de-identified queries to the government until they are ready to identify themselves, for example through the immigration application they were searching for with the tool.

“If you don't need personal information, don't collect personal information,” Carlin said in his presentation.
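The article doesn't say how the tool screens out personal information, but one simple way to implement that design choice is to check each query against patterns such as a social insurance number or phone number before it ever reaches the model, and ask the user to rephrase. The snippet below is a hypothetical sketch of that idea; the regular expressions are illustrative only and far from exhaustive.

```python
import re

# Illustrative patterns only: a 9-digit social insurance number (optionally
# grouped as 3-3-3) and a North American phone number.
PII_PATTERNS = {
    "social insurance number": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
    "phone number": re.compile(r"\b(?:\+?1[- .]?)?\(?\d{3}\)?[- .]?\d{3}[- .]?\d{4}\b"),
}


def screen_query(query: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons). ok is False if the query appears to contain PII."""
    found = [label for label, pattern in PII_PATTERNS.items() if pattern.search(query)]
    return (not found, found)


ok, reasons = screen_query("My SIN is 123-456-789, where is my application?")
if not ok:
    print(f"Please remove personal information ({', '.join(reasons)}) and try again.")
```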

On the panel, Carlin explained that his team could have collected more data on equity factors such as testers' gender and race, but they didn't want to have “very large data sets piled up.”

“The data set you collect now could become a weapon in the not-too-distant future,” Carlin said.

A scalpel, not a chainsaw

In addition to the potential for abuse, unfair AI tools may also exacerbate existing social biases. Hammed Afenifere, co-founder and CEO of Oneremit, explained in a panel discussion and an interview with BetaKit that AI models are not made for everyone.

According to Afenifere, training data can make an AI chatbot inherently biased, for example by providing an entrepreneur with market data for Western countries such as the US, Canada and the UK rather than African countries, simply because the model has less African data to work with. Other panelists compared his example to automatic soap dispensers that cannot detect dark skin.

“If we create responsible AI where it has that context, or [understands] how Africans generally work, you can bring more money into this country,” Afenifere said.

In his presentation, Carlin explained that the CDS team is working to ensure that diverse populations using the chatbot receive answers relevant to their demographic, such as information about programs for Black business owners, while also ensuring that the chatbot’s responses are not negative in nature.

“This is a scalpel, not a chainsaw-based process,” Carlin said. The CDS team is going to consult with various communities to get a better understanding of how they interact with government services so that the chatbot can be tested through that lens. A transgender person may interact with the government to obtain a specific, discrete set of services that are unique to them, Carlin said.

“We want to make sure that the test questions that we're going to use are created by that community, so that we're not just making it up if we don't have a transgender person on our development team,” Carlin said.

Who defines “responsible” AI?

The “scalpel, not chainsaw” approach spoke to one of the summit's understated themes: who decides what responsible AI is? The phrase “a seat at the table” came up throughout the day, as speakers discussed who was sitting at a given table or emphasized the importance of making sure everyone was represented.

“Imagine a future shaped by AI, shaped by community… and built with all of us at the table,” said Somto Mbelu, founder and program manager of the Ottawa Responsible AI Hub, in his opening remarks.

“Imagine a future shaped by AI, shaped by community… and built with all of us at the table.”

Somto Mbelu

In one of the first talks, Carleton University public policy professor Graeme Auld said that formalizing industry standards, such as those for artificial intelligence, is not a simple process. He, too, asked who would sit at these tables.

Afenifere told BetaKit after the summit that he was happy to help people look at responsible AI from a different perspective, but he was also concerned about who would get a seat at the table.

RELATED: Is SaaS dead? Technology leaders debate whether it's time for a requiem or a renaissance

“For me, I'm still a little confused: who is responsible for this? Who is 'we'?” Afenifere said. He suggested there may eventually be some kind of committee or government body responsible for implementing responsible AI policies.

“This conversation is still ongoing,” Afenifere added.

Carlin's approach of simply giving the proverbial seat at the table to the communities that will use the government's AI chatbot is perhaps more pragmatic than forming committees or organizations. It reflects the project's “organic” growth, which began with a proof of concept created by one person on a Friday afternoon, Carlin said.

CDS takes a “bubble” approach to peer consultation, starting with communities within the Government of Canada itself, such as employees who identify as Black or LGBTQ+, and then moving to broader community-based testing. Carlin acknowledged that Indigenous communities have “very diverse” views on AI and that there will not be a monolithic approach to consulting them.

“I don’t want to internally prescribe a perfect path forward if it’s going to be different from community to community,” Carlin told BetaKit.

The government website chatbot just completed a test with 2,700 random users, achieving a success rate of about 95 percent, and will be tested with 3,500 more next year. But it would have to be prepared to handle millions of requests, and Carlin recognizes the risk of the government providing harmful or unhelpful responses. Last year, Air Canada was held responsible after its AI chatbot gave a traveller bad advice. Carlin told BetaKit that the project is not guaranteed to leave beta or even officially launch, and the potential cost of the project remains a concern.

“It seems like a pejorative thought, but how much should taxpayers pay for web navigation services?” Carlin said. “We're building it just to see if it can be done.”

Feature image by Dennis Jarvis via Flickr, used under CC BY-SA 2.0.
