Tech (AI) and Regulation: 1) Creators urge Ottawa to force disclosure of ‘black box’ AI system training; 2) As police increasingly use facial recognition technology, calls grow for regulations
Courtesy Barrie360.com and Canadian Press
By Anja Karadeglija, Published June 30, 2024
Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.
But AI companies maintain that using the material to train their systems doesn’t violate copyright, and say limiting its use would stymie the development of AI in Canada.
The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada’s copyright laws should address the emergence of generative AI systems like OpenAI’s ChatGPT.
Generative AI can create text, images, videos and computer code based on a simple prompt, but to do that, the systems must first study vast amounts of existing content.
In its submission to the government, Access Copyright argued most and potentially all large language models “are currently profiting from unauthorized use and reproduction of copyright protected works.”
It’s taking place in a “black box,” according to Access Copyright, which represents writers, visual artists and publishers.
“Rightsholders know it is happening, but due to the information asymmetry between themselves and AI platforms, they cannot determine who is conducting the activity, with whose works, and have no mechanism to stop it from happening.”
Music Canada, which represents the country’s major record labels, said a fake AI-generated song that mimicked the voices of Drake and The Weeknd last year “made one thing abundantly clear: AI models and systems have already ingested massive amounts of proprietary datasets without authorization from the source of the data or rightsholders.”
The Writers’ Guild of Canada asked the government to start by implementing basic disclosure and reporting obligations. It said developers have all the knowledge of what work is being mined and how it’s being used, while creators have none of that information.
Some organizations have signed licensing deals with AI companies. But the Canadian Authors Association said rightsholders face “immense obstacles” in licensing their content “because they are being kept in the dark as to which of their works are being used” by which companies.
It asked Canada to clarify that text and data mining are subject to copyright laws.
Numerous lawsuits are underway in the United States over the use of copyrighted materials by generative AI systems, including one launched this week by the world’s biggest record labels against two AI music generators.
The Canadian Media Producers Association said legal cases illustrate the problem posed by a lack of transparency, citing one case in which the AI company argued the rightsholder couldn’t proceed with the infringement allegation unless they could specify the exact work used for training.
“Rightsholders will also undoubtedly face similar evidentiary issues as many datasets used to train Generative AI systems are purportedly destroyed after the initial training is complete,” it said.
The group said it’s an issue that “demands immediate attention” and asked the government to implement transparency requirements.
But AI companies maintain the kind of transparency rightsholders are asking for isn’t realistic.
Microsoft told the government training large-scale AI systems involves “vast volumes” of data, and companies shouldn’t have to keep records of that or disclose the content that is used for training.
“It would not be feasible to record such information and any such requirement would inhibit AI development,” it said.
The company argued it is not “copyright infringement to analyze works and learn concepts and facts.”
Google said AI training is already permitted under existing copyright law, though the government should adopt an exemption to make that explicit.
Google said requiring permission to use content for training purposes would expose competitively sensitive information and “would effectively block the development and use of large language models and other types of cutting-edge AI.”
It also said AI developers don’t have access to accurate information about copyright status.
“In fact, there is no such source of truth anywhere in the world. Thus, complying with disclosure rules may simply prove impossible from the start.”
Canadian AI company Cohere said using content for training AI systems works similarly to how an individual reads books to become more informed.
The company said the process doesn’t violate copyright, and argued that needs to be clear in the law. Otherwise, “Canada’s ambitions to be the home of world-leading AI companies and ecosystems” could be undermined.
The Council of Canadian Innovators, which represents the Canadian tech sector, said disclosure requirements would disproportionately harm smaller companies relative to their Big Tech rivals. It warned this would “seriously hamper the potential of Canadian companies to scale significantly.”
2) As police increasingly use facial recognition technology, calls grow for regulations
By Joe Bongiorno, June 30, 2024
Some police services in Canada are using facial recognition technology to help solve crimes, while other police forces say human rights and privacy concerns are holding them back from employing the powerful digital tools.
It’s this uneven application of the technology — and the loose rules governing its use — that has legal and AI experts calling on the federal government to set national standards.
“Until there’s a better handle on the risks involved with the use of this technology, there ought to be a moratorium or a range of prohibitions on how and where it can be used,” says Kristen Thomasen, law professor at the University of British Columbia.
As well, the patchwork of regulations on emerging biometric technologies has created situations in which some citizens’ privacy rights are better protected than others’.
“I think the fact that we have different police forces taking different steps raises concerns (about) inequities and how people are treated across the country, but (it) also highlights the continuing importance of some kind of federal action to be taken,” she said.
Facial recognition systems are a form of biometric technology that use AI to identify people by comparing images or video of their faces — often captured by security cameras — with existing images of them in databases. The technology has been a controversial tool in police hands.
In 2021, the Office of the Privacy Commissioner of Canada found that the RCMP violated privacy laws when it used the technology without the public’s knowledge. That same year, Toronto police admitted some of their officers had used facial recognition software without informing their chief. In both cases the technology was supplied by American company Clearview AI, whose database was composed of billions of images scraped from the internet without the consent of those pictured.
Last month, York and Peel police in Ontario said they had begun implementing facial recognition technology provided by multinational French company Idemia. In an interview, York police Const. Kevin Nebrija said the tools “help speed up investigations and to identify suspects sooner,” adding that in terms of privacy, “nothing has changed because security cameras are all around.”
Yet in neighbouring Quebec, Montreal police Chief Fady Dagher says the force will not adopt such biometric identification tools without a debate on issues ranging from human rights to privacy.
“It’s going to be something that is going to take a lot of discussion before we think about putting in place,” Dagher said in a recent interview.
Nebrija stressed that the department consulted the Privacy Commissioner of Ontario for best practices, adding that the images police will acquire will be “obtained lawfully,” either with the co-operation of security camera owners or by obtaining court orders for the images.
And although York police insist officers will seek judicial authority, Kate Robertson, a senior researcher at University of Toronto’s Citizen Lab, says that Canadian police forces have a history of doing just the opposite.
Since the revelations about Toronto police using Clearview AI between 2019 and 2020, Robertson said she is “still not aware of any police service in Canada that is obtaining prior approval from a judge to use facial recognition technology in their investigations.”
According to Robertson, getting the go-ahead from the court, usually in the form of a warrant, represents the “gold standard of privacy protection in criminal investigations.” This ensures a facial recognition tool, when used, is appropriately balanced against the right to free expression, freedom of assembly and other rights enshrined in the Charter.
While the federal government doesn’t have jurisdiction over provincial and municipal police forces, it can amend the Criminal Code to incorporate legal requirements for facial recognition software in the same way it updated the law to address voice recording technologies that could be used for surveillance.
In 2022, the federal, provincial and territorial heads of Canada’s privacy commissions called on lawmakers to establish a legal framework for appropriate use of facial recognition technology, including empowering independent oversight bodies, prohibiting mass surveillance and limiting how long images can be retained in databases.
Meanwhile, the federal Economic Development Department said Canadian law “could potentially” regulate corporate collection of personal information, under the Personal Information Protection and Electronic Documents Act, or PIPEDA.
“If, for example, a police force, including the RCMP, were to contract out activities that use personal information to a private company conducting commercial activities, then these activities could potentially be regulated by PIPEDA, including services related to facial recognition technologies,” the department said.
Quebec provincial police also have a contract with Idemia, but they wouldn’t say exactly how they use the company’s technology.
In an emailed statement, the force said its “automated face comparison system is not used to check the identity of individuals. This tool is used for criminal investigations and is limited to the data sheets of individuals who have been fingerprinted under the Identification of Criminals Act.”
AI governance expert Ana Brandusescu says Ottawa and the country’s police forces have not heeded the calls for better governance, transparency, and accountability in procurement of facial recognition technology.
“Law enforcement is not listening to academics, civil society experts, people with lived experience, people who are directly harmed,” she said.
