Artificial Intelligence – AI: 1) Common Worries Are Privacy And Concerns That Society Is Going To Become Too Dependent On AI 2) With “Superhuman” Artificial Intelligence Looming, Canada Needs Law Now: AI Pioneer
1) Common Worries Are Privacy And Concerns That Society Is Going To Become Too Dependent On AI
Courtesy of Barrie360.com and The Canadian Press. Published: Feb 9th, 2024
Anja Karadeglija, The Canadian Press
Despite worries artificial intelligence lacks empathy and could be coming to steal their jobs, a growing number of Canadians are turning to AI tools, a new poll suggests.
Thirty per cent of Canadians now use artificial intelligence tools, the Leger poll suggested, up from 25 per cent a year ago, though two-thirds of respondents said the prospect of having them in their lives is scary.
The poll of 1,614 Canadians shows a distinct divide between how younger and older people view AI — 58 per cent of those 18 to 34 reported using AI tools, compared to just 13 per cent of those 55 and older.
Christian Bourque, executive vice-president of Leger, said the number of people who have been exposed to or interacted with AI is probably higher than reported, because some individuals may not be aware they’re using it. A website might have a chatbot introduce itself as Dave, for example — and the user may not realize Dave isn’t a real person.
Respondents aged 18 to 34 were more familiar with the concept of chatbots, or automated chat assistants on websites, with 64 per cent reporting familiarity compared to 38 per cent for those older than 55. The poll does not have a margin of error because online polls aren’t considered truly random samples.
Those who have used AI services or tools generally had a good experience with them, with 71 per cent rating them as good or excellent.
But Canadians, in general, appear to have mixed feelings, with 31 per cent of respondents taking the position they’re good for society and 32 per cent that they’re bad for society. Where respondents stood on the issue varied with age; 42 per cent of younger respondents thought AI tools were good for society, compared to only 23 per cent of older Canadians.
Common worries include privacy and the concern that society is going to become too dependent on AI, a statement 81 per cent of those polled agreed with. Three-quarters said AI tools lack the emotion and empathy required to make good decisions and threaten human jobs.
Bourque said those results indicate that “people have fairly deep-rooted fears about the use of AI in our society.”
Most, or 58 per cent, trust AI to adjust their thermostat, play music or vacuum their house, while slightly fewer, 53 per cent, trust using facial recognition or biometrics to access personal information.
Canadians are more cautious about using AI tools to create content for important projects at school or work, with only 37 per cent trusting them in that context. The age gap was evident in that question, too, with 44 per cent of those 18 to 34 having confidence in the tech for those projects, compared to 29 per cent of those 55-plus.
Similarly, nearly half of younger respondents were OK with tech platforms using AI to decide what content to show on social media, compared to 23 per cent of older Canadians.
The trust drops when it comes to personal safety. Fewer than one−quarter had faith in AI to transport them in a vehicle, though the age gap was evident again, with 28 per cent of the youngest demographic trusting AI driving compared to only 16 per cent of the oldest.
A similar divide was evident when it comes to relying on AI to find a life partner online, something a quarter of respondents 18 to 34 trusted the tech to do, compared to only 10 per cent of those older than 55.
2) With “Superhuman” Artificial Intelligence Looming, Canada Needs Law Now: AI Pioneer
Courtesy of Barrie360.com and The Canadian Press. Published: Feb 5th, 2024
Ottawa
The federal government must move urgently to regulate artificial intelligence, says a top AI pioneer, warning the technology’s current trajectory poses major societal risks.
Yoshua Bengio, dubbed a “godfather” of AI, told members of Parliament Monday that Ottawa should put a law in place immediately, even if that legislation is not perfect.
The scientific director at Mila, the Quebec AI Institute, says a “superhuman” intelligence that is at least as smart as a human being could be developed within the next two decades, or even within the next few years.
“We’re not ready,” Bengio said.
One short-term risk of AI is the use of deepfake videos to spread disinformation, he said. They use AI to make it look as though a public figure is saying something they didn’t, or doing something that never happened.
The technology can also be used to interact with people through text or dialogue “in a way that can fool a social-media user and make them change their mind on political questions,” said Bengio.
“There’s real concern about use of AI in politically oriented ways that go against the principles of our democracy.”
A year or two down the road, the worry is that more-advanced systems can be used for cyberattacks.
AI systems are getting better and better at programming.
“When these systems get strong enough to defeat our current cyber defences and our industrial digital infrastructure, we are in trouble,” Bengio said.
“Especially if these systems fall in the wrong hands.”
The House of Commons industry committee where Bengio testified Monday is studying a Liberal government bill that would update privacy law and begin regulating some artificial intelligence systems.
The bill as it’s drafted would give the government time to develop regulations, but Bengio says some provisions should take effect right away.
“With the current approach, it would take something like two years before enforcement [is] possible,” he said.
One of the initial rules he said he wants to see implemented is a registry that would require systems with a specified level of capability to report to the government.
Bengio said that would put the responsibility and cost of demonstrating safety on large tech companies developing these systems, rather than on taxpayers.
Bill C-27 was first drafted in 2022 to target what are described as “high-impact” AI systems.
Bengio said the government should change the legislation’s definition of “high-impact” to include technology that poses national security and societal threats.
That could include any AI systems that bad actors could use to design dangerous cyberattacks and weapons, or systems that find ways to self-replicate despite programming instructions to the contrary.
Generative AI systems like ChatGPT, which can create text, images and videos, emerged for widespread public use after the bill was first introduced.
The government says it plans to amend the legislation to reflect that.
Liberals say they aim to require companies behind such systems to take steps ensuring the content they create is identifiable as AI-generated.
Bengio said it’s “very important to cover general-purpose AI systems because they’re also the ones that could be the most dangerous if misused.”
Catherine Régis, a professor at the Université de Montréal, also said at the committee meeting Monday that the government needs to act urgently, citing recent “meteoric developments in AI, which we’re all familiar with.”
Speaking in French, she pointed out that AI regulation is a global effort, and Canada must figure out what to do at the national level if it wants to have a voice.
“Decisions will be taken on a global scale that’ll have an impact on all countries,” she said.
Establishing a clear and solid vision at the Canadian level is “one of the essential conditions to play a credible structuring and influential role in global governance.”
