AI Articles: 1) No expert consensus on AI risks, trajectory ‘remarkably uncertain’: report; 2) Will AI replace doctors who read X-rays, or just make them better than ever? 3) Using AI, Mastercard expects to find compromised cards before criminals use them
1) No expert consensus on AI risks, trajectory ‘remarkably uncertain’: report
Courtesy: Barrie360.com and Canadian Press
By Anja Karadeglija
Professor of computer science Yoshua Bengio poses during an interview in Quebec City on May 1, 2024. THE CANADIAN PRESS/Jacques Boissinot
A major international report on the safety of artificial intelligence says experts can’t agree on the risk the technology poses — and it’s unclear whether AI will help or harm us.
The report, chaired by Canada’s Yoshua Bengio, concludes the “future trajectory of general-purpose AI is remarkably uncertain.”
It says a “wide range of trajectories” are possible “even in the near future, including both very positive and very negative outcomes.”
The report was commissioned at last year’s AI Safety Summit hosted by the United Kingdom, the first such global meeting on artificial intelligence.
The U.K. asked Bengio, dubbed a “godfather” of AI and scientific director at Mila, the Quebec AI Institute, to chair the report. It was released ahead of another global summit on AI, to be held in Seoul, South Korea, next week.
“We know that advanced AI is developing very rapidly, and that there is considerable uncertainty over how these advanced AI systems might affect how we live and work in the future,” Bengio wrote in the report.
The U.K. government said in a press release Friday the report is the “first-ever independent, international scientific report” on AI safety, and that it would “play a substantial role” in informing the discussions in South Korea next week.
A group of 75 experts contributed to the report, including a panel nominated by 30 countries, the European Union and the United Nations. The report released Friday is an interim one, with a final version expected by the end of the year.
It focuses on general-purpose AI systems, such as OpenAI’s ChatGPT, which can generate text, images and videos based on prompts.
The report says the experts “continue to disagree on several questions, minor and major, around general-purpose AI capabilities, risks and risk mitigations.”
One of the areas of debate is the likelihood of “risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI.”
The report outlines a number of risks, including the harm AI can cause through fake content, disinformation and fraud, as well as cyberattacks. It also flags the risks bias in AI can cause, particularly in “high-stakes domains such as health care, job recruitment and financial lending.”
One potential scenario is that humans will lose command of artificial intelligence, and not be able to control the technology even if it may be causing harm.
The report said there is consensus that the current general-purpose technology doesn’t pose that risk, but some experts believe that ongoing work to develop autonomous AI, which can “act, plan and pursue goals,” could lead to such an outcome.
“Experts disagree about how plausible loss-of-control scenarios are, when they might occur and how difficult it would be to mitigate them,” the report says.
2) Will AI replace doctors who read X-rays, or just make them better than ever?
Courtesy Barrie360.com and Canadian Press
Matthew Perrone – The Associated Press
How good would an algorithm have to be to take over your job?
It’s a new question for many workers amid the rise of ChatGPT and other AI programs that can hold conversations, write stories and even generate songs and images within seconds.
For doctors who review scans to spot cancer and other diseases, however, AI has loomed for about a decade as more algorithms promise to improve accuracy, speed up work and, in some cases, take over entire parts of the job. Predictions have ranged from doomsday scenarios in which AI fully replaces radiologists, to sunny futures in which it frees them to focus on the most rewarding aspects of their work.
That tension reflects how AI is rolling out across health care. Beyond the technology itself, much depends upon the willingness of doctors to put their trust — and their patients’ health — in the hands of increasingly sophisticated algorithms that few understand.
Even within the field, opinions differ on how much radiologists should be embracing the technology.
“Some of the AI techniques are so good, frankly, I think we should be doing them now,” said Dr. Ronald Summers, a radiologist and AI researcher at the National Institutes of Health. “Why are we letting that information just sit on the table?”
Summers’ lab has developed computer-aided imaging programs that detect colon cancer, osteoporosis, diabetes and other conditions. None of those have been widely adopted, which he attributes to the “culture of medicine,” among other factors.
Radiologists have used computers to enhance images and flag suspicious areas since the 1990s. But the latest AI programs can go much further, interpreting the scans, offering a potential diagnosis and even drafting written reports about their findings. The algorithms are often trained on millions of X-rays and other images collected from hospitals.
Across all of medicine, the FDA has OK’d more than 700 AI algorithms to aid physicians. More than 75% of them are in radiology, yet just 2% of radiology practices use such technology, according to one recent estimate.
For all the promises from industry, radiologists see a number of reasons to be skeptical of AI programs: limited testing in real-world settings, lack of transparency about how they work and questions about the demographics of the patients used to train them.
“If we don’t know on what cases the AI was tested, or whether those cases are similar to the kinds of patients we see in our practice, there’s just a question in everyone’s mind as to whether these are going to work for us,” said Dr. Curtis Langlotz, a radiologist who runs an AI research centre at Stanford University.
To date, all the programs cleared by the FDA require a human to be in the loop.
In early 2020, the FDA held a two-day workshop to discuss algorithms that could operate without human oversight. Shortly afterwards, radiology professionals warned regulators in a letter that they “strongly believe it is premature for the FDA to consider approval or clearance” of such systems.
But European regulators in 2022 approved the first fully automatic software that reviews and writes reports for chest X-rays that look healthy and normal. The company behind the app, Oxipit, is submitting its U.S. application to the FDA.
The need for such technology in Europe is urgent, with some hospitals facing months-long backlogs of scans due to a shortage of radiologists.
In the U.S., that kind of automated screening is likely years away, not because the technology isn’t ready, according to AI executives, but because radiologists aren’t yet comfortable turning over even routine tasks to algorithms.
“We try to tell them they’re overtreating people and they’re wasting a ton of time and resources,” said Chad McClennan, CEO of Koios Medical, which sells an AI tool for ultrasounds of the thyroid, the vast majority of which are not cancerous. “We tell them, ‘Let the machine look at it, you (review and) sign the report and be done with it.’”
Radiologists tend to overestimate their own accuracy, McClennan says. Research by his company found physicians viewing the same breast scans disagreed with each other more than 30% of the time on whether to do a biopsy. The same radiologists even disagreed with their own initial assessments 20% of the time, when viewing the same images a month later.
About 20% of breast cancers are missed during routine mammograms, according to the National Cancer Institute.
And then there’s the potential for cost savings. On average, U.S. radiologists earn over $350,000 annually, according to the Department of Labor.
In the near term, experts say AI will work like autopilot systems on planes — performing important navigation functions, but always under the supervision of a human pilot.
That approach offers reassurance to both doctors and patients, says Dr. Laurie Margolies of the Mount Sinai hospital network in New York, which uses Koios’ breast imaging AI to get a second opinion on breast ultrasounds.
“I will tell patients, ‘I looked at it, and the computer looked at it, and we both agree,’” Margolies said. “Hearing me say that we both agree, I think that gives the patient an even greater level of confidence.”
The first large, rigorous studies testing AI-assisted radiologists against those working alone give hints at the potential improvements.
Initial results from a Swedish study of 80,000 women showed a single radiologist working with AI detected 20% more cancers than two radiologists working without the technology.
In Europe, mammograms are reviewed by two radiologists to improve accuracy. But Sweden, like other countries, faces a workforce shortage, with only a few dozen breast radiologists in a country of 10 million people.
Using AI instead of a second reviewer decreased the human workload by 44%, according to the study.
Still, the study’s lead author says it’s essential that a radiologist make the final diagnosis in all cases.
If an automated algorithm misses a cancer, “that’s going to be very negative for trust in the caregiver,” said Dr. Kristina Lang of Lund University.
The question of who could be held liable in such cases is among the thorny legal issues that have yet to be resolved.
One result is that radiologists are likely to continue double-checking all AI determinations, lest they be held responsible for an error. That’s likely to wipe out many of the predicted benefits, including reduced workload and burnout.
Only an extremely accurate, reliable algorithm would allow radiologists to truly step away from the process, says Dr. Saurabh Jha of the University of Pennsylvania.
Until such systems emerge, Jha likens AI-assisted radiology to someone who offers to help you drive by looking over your shoulder and constantly pointing out everything on the road.
“That’s not helpful,” Jha says. “If you want to help me drive then you take over the driving so that I can sit back and relax.”
3) Using AI, Mastercard expects to find compromised cards before criminals use them
Courtesy Barrie360.com and Canadian Press
Ken Sweet – The Associated Press
Mastercard said Wednesday it expects to be able to discover that your credit or debit card number has been compromised well before it ends up in the hands of a cybercriminal.
In its latest software update rolling out this week, Mastercard is integrating artificial intelligence into its fraud-prediction technology that it expects will be able to see patterns in stolen cards faster and allow banks to replace them before they are used by criminals.
“Generative AI is going to allow to figure out where did you perhaps get your credentials compromised, how do we identify how it possibly happened, and how do we very quickly remedy that situation not only for you, but the other customers who don’t know they are compromised yet,” said Johan Gerber, executive vice president of security and cyber innovation at Mastercard, in an interview.
Mastercard, which is based in Purchase, New York, says the new update lets it combine contextual information, such as geography, time and addresses, with the incomplete but compromised card numbers that appear in databases, so it can reach cardholders sooner and have the bad card replaced.
The patterns can also be run in reverse, using batches of bad cards to identify potentially compromised merchants or payment processors. The pattern recognition goes beyond what humans could do through database inquiries or other standard methods, Gerber said.
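The reverse approach Gerber describes resembles what fraud analysts call a “common point of purchase” analysis: if most cards in a batch of known-bad cards all transacted at the same merchant, that merchant is a likely breach point. A minimal sketch of the idea, assuming we have transaction histories for the bad cards; the data shapes and threshold are illustrative, not Mastercard’s actual system.

```python
def likely_compromise_points(compromised_cards, transactions, min_share=0.8):
    """Rank merchants by how many known-bad cards transacted there.

    compromised_cards: set of card IDs known to be compromised
    transactions: list of (card_id, merchant_id) pairs
    min_share: fraction of the bad cards a merchant must have seen to be flagged
    """
    seen = {}  # merchant_id -> set of bad cards observed at that merchant
    for card, merchant in transactions:
        if card in compromised_cards:
            seen.setdefault(merchant, set()).add(card)
    threshold = min_share * len(compromised_cards)
    # Merchants that saw enough of the bad cards, most-suspicious first
    return sorted(
        (m for m, cards in seen.items() if len(cards) >= threshold),
        key=lambda m: -len(seen[m]),
    )

# Toy data: all three bad cards were used at the same gas station
bad = {"card_a", "card_b", "card_c"}
txns = [
    ("card_a", "gas_station_17"), ("card_b", "gas_station_17"),
    ("card_c", "gas_station_17"), ("card_a", "grocery_2"),
    ("card_x", "grocery_2"),  # a card not known to be bad
]
print(likely_compromise_points(bad, txns))  # → ['gas_station_17']
```

A production system would work over billions of transactions and fold in the contextual signals mentioned above (geography, time, addresses) rather than a single co-occurrence count, which is where the machine-learning pattern recognition earns its keep.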
Billions of stolen credit card and debit card numbers are floating around the dark web, available for purchase by any criminal. Most were stolen from merchants in data breaches over the years, but a significant number were also stolen from unsuspecting consumers who used their credit or debit cards at the wrong gas station, ATM or online merchant.
These compromised cards can remain undetected for weeks, months or even years. Often it is only when the payment networks dive into the dark web to fish for stolen numbers, a merchant learns about a breach, or a criminal uses the card that the networks and banks figure out a batch of cards might be compromised.
“We can now actually proactively reach out to the banks to make sure that we service that consumer and get them a new card in her or his hands so they can go about their lives with as little disruption as possible,” Gerber said.
The payment networks are largely trying to move away from the “static” credit card or debit card numbers — that is a card number and expiration date that is used universally across all merchants — and move to unique numbers for specific transactions. But it may take years for that transition to happen, particularly in the U.S. where payment technology adoption tends to lag.
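The shift away from “static” numbers is commonly implemented through payment tokenization: the real card number stays in a secure vault, and each merchant or device is issued a distinct surrogate number, so a number stolen from one merchant is useless anywhere else. A toy sketch of the concept, with an in-memory vault and illustrative names; real token services add cryptograms, expiry handling and lifecycle management.

```python
import secrets

class TokenVault:
    """Toy token vault: maps (real card number, merchant) to a surrogate token."""

    def __init__(self):
        self._by_pair = {}   # (pan, merchant) -> token
        self._by_token = {}  # token -> (pan, merchant)

    def tokenize(self, pan, merchant):
        """Issue (or reuse) a merchant-specific token for a real card number."""
        key = (pan, merchant)
        if key not in self._by_pair:
            token = "tok_" + secrets.token_hex(8)
            self._by_pair[key] = token
            self._by_token[token] = key
        return self._by_pair[key]

    def detokenize(self, token, merchant):
        """Resolve a token to the real number, but only for the merchant it was issued to."""
        pan, issued_to = self._by_token[token]
        if issued_to != merchant:
            raise PermissionError("token not valid at this merchant")
        return pan

vault = TokenVault()
t = vault.tokenize("5555 4444 3333 1111", "grocery_2")
assert vault.detokenize(t, "grocery_2") == "5555 4444 3333 1111"
# The same token replayed at a different merchant is rejected:
# vault.detokenize(t, "gas_station_17")  # raises PermissionError
```

The design choice worth noticing is the domain restriction in `detokenize`: because every token is bound to the context it was issued for, a breach at one merchant no longer exposes a universally usable credential, which is exactly why the networks want to retire the static 16-digit number.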
While more than 90% of all in-person transactions worldwide are now using chip cards, the figure in the U.S. is closer to 70%, according to EMVCo, the technological organization behind the chip in credit and debit cards.
Mastercard’s update comes as its major competitor, Visa Inc., also looks for ways to make consumers discard the 16-digit credit and debit card number. Visa last week announced major changes to how credit and debit cards will operate in the U.S., meaning Americans will be carrying fewer physical cards in their wallets, and the 16-digit credit or debit card number printed on every card will become increasingly irrelevant.
