AI: 1) AI meeting tools might be time savers but beware their risks: privacy experts; 2) Growing more complex by the day: How should journalists govern use of AI in their products?; 3) AI song generator startups Suno and Udio angered the music industry. Now they’re hoping to join it

1) AI meeting tools might be time savers but beware their risks: privacy experts

Courtesy Barrie360.com and Canadian Press

By Tara Deschamps, March 1, 2026

Someone may be listening in on your next meeting — and it’s not your micromanaging boss a few desks over, nor your spouse or kids across the room.

Artificial intelligence-based tools like Fireflies.ai, Otter.ai, Trint and Fathom are increasingly being used to record, transcribe and pump out summaries of meetings. 

Sometimes their presence is overt — another participant panel on a video conference screen or a flashing recording indicator in the corner — but other times, users have them quietly running in the background.

The wide gamut of tools and lack of transparency sometimes surrounding them are contributing to a privacy minefield, experts say.

“We’re sort of entering this phase where I don’t think you can go a week without hearing a news story around something that’s gone wrong with AI or some data breach somewhere,” said Nicolas Joubert, a Winnipeg-based partner at law firm MLT Aikins.

Still, AI notetakers, transcription and summary tools have exploded in popularity in recent years with boards, doctors and young professionals becoming some of the most fervent adopters.

They like the technology because it takes some of the drudgery out of meetings, allows participants to focus more on the conversation at hand and, later, reduces the time it takes to recap what happened.

Kael Campbell, president of Red Seal Recruiting Solutions Ltd., said his Victoria, B.C.-based hiring firm has used interviewing platform HoneIt for about four years. He likes that the transcripts it produces are often more comprehensive than his own notes.

“I would not make full verbatim notes and now, we have full transcripts, so if I had a client ask about very specific stuff, I was able to go back,” he said.

But the tools have just as many risks as perks. Experts say they often create a huge volume of data — sometimes riddled with mistakes and personal information — and have a whole host of privacy issues.

The risks begin with what gets recorded. These tools don’t know the difference between the typical small talk about the weather, hobbies or politics that punctuates meetings and the actual substance of a conversation. Thus, they capture and summarize unnecessary but often very personal details, Joubert said.

They also can’t tell when a meeting goes “in camera” — a term used to describe private discussions, like some board and executive meetings or portions of court proceedings, that should not be taped — and keep recording.

“All of a sudden, all those in-camera discussions have just been distributed to the entire meeting mailing list and that’s obviously a huge problem,” said Teresa Scassa, Canada Research Chair in information law and policy at the University of Ottawa.

Sometimes the notes aren’t accurate either. When the sound isn’t great or the systems aren’t familiar with certain words, the tools guess what was said rather than noting that part of the meeting was inaudible.

They can also hallucinate — a term used to describe when AI systems generate material that is untrue.

The potential problems stretch beyond the performance of these tools.

Data captured by these products can be used to train new AI models, and recordings and transcriptions often get stored in the cloud, making them vulnerable to leaks and breaches.

“Is it storing data outside of the country? Is it processing data and selling it to third parties, on a non-aggregated, identified basis? Is it using your data to train other models? Do other parties or customers or subscribers or end users get access to your data when it is incorporated in the system?” said Joubert.

“There’s all sorts of questions you have to ask yourself and get comfortable with the answers before you should really think about implementing these systems.”

When something goes wrong, it can be news to meeting participants that a recording was even made. In some cases, they only find out when their information turns up in an email blast or is posted online, said Scassa.

She points to a September 2024 example at an Ontario hospital.

A letter the province’s information and privacy commissioner posted online last year highlighted a virtual meeting between doctors at the unnamed hospital to discuss seven patients.

One of the invited doctors had left the hospital in June 2023. Because he was using his personal email for the meeting and had installed Otter.ai on his personal device, the transcription service was able to access the meeting and record it without any attendees noticing. 

The inadvertent recording was discovered later when Otter.ai automatically emailed a meeting summary and transcript of the conversation about patients to 65 invitees. 

The hospital reported the breach to Ontario’s privacy commissioner, demanded recipients delete the missive and updated its policies around the tools.

Joubert and Scassa agree that what happened at the hospital should be a lesson for anyone using AI meeting tools.

They recommend users review their settings carefully so they’re not inadvertently recording and distributing material. They also say that whenever someone uses an AI tool, they should notify other participants and take any of their qualms into consideration.

Red Seal Recruiting Solutions’ clients aren’t bothered by AI tools, but some want the notes deleted later. If anyone had an objection, Campbell said the company would revert to typing or handwriting notes, even though it would take more time. 

His staff notify meeting participants whenever meeting tools are being used and verify any output for accuracy before disseminating it. 

Before implementing the software, the company looked closely at the terms and verified that they comply not just with Canadian legislation but also with European regulations, the world’s strictest.

Joubert advises people and companies to pore over these agreements to understand what kind of access and rights they’re granting the software and what can go wrong.

“Perhaps not surprisingly, a lot of the big vendors will tend to want to push a lot of that risk and responsibility to the customer,” said Joubert. 

While he sees the value in AI meeting tools and knows a lot of what gets said in meetings may seem innocuous, he reminds people there’s a danger whenever that info can be accessed by someone else or combined with other details from social media or data leaks. 

“You probably wouldn’t want to stand on a busy street corner with a sandwich board with that information during rush hour,” he said. “It’s really no different.”

2) Growing more complex by the day: How should journalists govern use of AI in their products?

Courtesy Barrie360.com and The Associated Press

By David Bauder, March 2, 2026

Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence plays a major role — grappling with questions about how much the technology is used, what consumers should be told about it, and whether anything can be done for the journalists who will be left behind.

These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines earlier this month. They’re inching toward a potential strike, in what is believed to be the first such job action in the news business where how to deal with AI is the chief sticking point.

Few expect this dispute will be the last.

AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Epstein files. AI suggests headlines, summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.

Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.

Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times had trouble with AI and opinion pieces. Ars Technica, a publication that has frequently reported on the risks of overreliance on AI tools, said AI fabricated quotes, then embarrassed itself further by failing to follow its own policy of telling readers when such a tool is used.

The ProPublica dispute is noteworthy for how it touches on issues that are frequently cause for debate. The union representing ProPublica’s journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.

Along with holding informational pickets, union members pledged overwhelmingly that they would be willing to strike without a satisfactory agreement, said Jen Sheehan, spokeswoman for the New York Guild, the union that represents many journalists in the city.

“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.

ProPublica has rejected its requests, the union said. Insight into why can be found in an essay, “Something Big is Happening,” that circulated widely this month. Author and investor Matt Shumer, who said he’s spent six years building an AI startup, wrote that the technology is advancing so quickly that “if you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”

The reluctance of news outlets to put policies on record

Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.

Rather than make promises that can’t be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesman Tyson Evans said. In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.

“We’re approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”

Fifty-seven of 283 contracts at U.S. news organizations negotiated by the NewsGuild-CWA contain language related to artificial intelligence, said Jon Schleuss, president of the union that represents more journalists than any other in the country. The first such deals happened in 2023, and The Associated Press was one pioneer. He wants provisions in more contracts.

It won’t be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of U.S. outlets have done so.

“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”

The guild is pushing for contracts that guarantee AI won’t eliminate jobs. That’s no surprise; unions exist to protect jobs. Schleuss characterized a proposal that ensures an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.

“Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it’s a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”

Apparently, not everyone in journalism agrees. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.

Quinn’s newspaper has been sending some of its journalists out to cover stories by interviewing people and collecting quotes and information, then feeding it all to a computer to write. While a human edits what the computer spits out, an integral part of the process, a reporter using his or her judgment about how to tell a story, has been stripped away. Quinn defended it as the best use of limited resources.

A ‘Catch-22’ in public attitudes toward AI disclosure

Research shows that a vast majority of American consumers believe that it’s very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota. But here’s the rub: Such disclosure makes them trust the outlet’s stories less, not more.

A significant minority — 30% in a study Toff conducted last year — doesn’t want AI used in journalism at all.

Telling a reader that AI was used is not as simple as it sounds. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter’s Mahadevan said.

Two lawmakers in New York state — the nation’s publishing capital — introduced legislation this month requiring clear disclaimers when artificial intelligence is used in published content. There’s no immediate word on its chances for passage, but both sponsors are Democrats in a legislature controlled by that party.

Mahadevan believes it’s fair to have policies that require human involvement — editing to prevent slip-ups, for example. But even these declarations are open to interpretation, he said. If an outlet uses chatbots to answer reader questions, are they being edited by a human being?

“Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it’s important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”

3) AI song generator startups Suno and Udio angered the music industry. Now they’re hoping to join it

Courtesy Barrie360.com and The Associated Press

By Matt O’Brien and Rodrique Ngowi, February 28, 2026

Suno CEO Mikey Shulman pulls up a chair to the recording studio desk where a research scientist at his artificial intelligence company is creating a new song.

The flute line sounds promising.

The percussion needs work.

Neither of them is playing an instrument. They type some descriptive words – Afrobeat, flute, drums, 90 beats per minute – and out comes an infectious rhythm that livens up the 19th century office building where Suno is headquartered in Cambridge, Massachusetts. They toggle some editing tools to refine the new track.

Much like early experiences with ChatGPT or AI text-to-image generators, trying to make an AI-generated song on platforms like Suno or its rival, Udio, can seem a little like magic. It takes no musical skills, practice or emotional wellspring to conjure up a new tune inspired by almost any of the world’s musical traditions.

But the process of training AI on beloved musicians of the past and present to produce synthetic approximations of their work has angered the music industry and brought much of its legal might to bear against the two startups.

Now, after their users have flooded the internet with millions of AI-generated songs, some of which have found their way onto streaming services like Spotify, the leaders of Suno and New York-based Udio are trying to negotiate with record labels to secure a foothold in an industry that shunned them.

“We have always thought that working together with the music industry instead of against the music industry is the only way that this works,” said Shulman, who co-founded Suno in 2022. “Music is so culturally important that it doesn’t make sense to have an AI world and a non-AI world of music.”

Sony Music, Universal Music and Warner Records sued the two startups for copyright infringement in 2024, alleging that they were exploiting the recorded works of their artists.

Since then, the pair have strived to make peace with the industry. Suno, now valued at $2.45 billion, last year struck a settlement with Warner, and Udio has signed licensing agreements with Warner, Universal and independent label Merlin. Only one major label, Sony, has not settled with either startup as the lawsuits move forward in Boston and New York federal courts. Suno also faces legal challenges in Europe brought by groups representing music creators.

The first of the settlement deals, between Udio and Universal, led to an exodus of frustrated Udio users who were blocked from downloading their own AI-generated tracks. But Udio CEO Andrew Sanchez said he’s optimistic about what the future will bring as his company adapts its business model to let fans of willing artists use AI to play with and potentially alter their works.

“Having a close relationship with the music industry is elemental to us,” Sanchez said in an interview. “Users really want to have an anchor to their favorite artists. They want to have an anchor to their favorite songs.”

Many professional musicians are skeptical. Singer-songwriter Tift Merritt, co-chair of the Artists Rights Alliance, recently helped organize a “Stealing Isn’t Innovation” campaign by artists — including Cyndi Lauper and Bonnie Raitt — to urge AI companies to pursue licensing deals and partnerships rather than build platforms without regard for copyright law.

“The economy of AI music is built totally on the intellectual property, globally, of musicians everywhere without transparency, consent, or payment. So, I know they value their intellectual property, but ours has been consumed in order to replace us,” Merritt said in an interview in Raleigh, North Carolina.

Shulman contends technology “evolves very often faster than the law,” and says his company tries to be thoughtful about “not breaking the law” while also working to “deliver products that the world really wants.”

Suno CEO doesn’t really think ‘people don’t enjoy’ making music

When the music industry first confronted Suno over alleged copyright infringement, the company’s antagonistic response alienated professionals like Merritt.

Symbolizing the divide was a clip last year in which Shulman was quoted as saying, “it’s not really enjoyable” to make music most of the time. Shulman started learning piano at age 4 but later dropped it. He took up bass guitar at 12, playing in rock bands in high school and college. He said that experience gave him some of the best moments of his life.

“You need to get really good at an instrument or really good at a piece of production software,” Shulman said on the “The Twenty Minute VC” podcast. “I think the majority of people don’t enjoy the majority of the time they spend making music.”

“Clearly, I wish I had said different words,” Shulman told the AP. The context, he added, was that “to produce perfect music takes a lot of repetitions and not all of those minutes are the most enjoyable bits of making music. On the whole, obviously, music is amazing. I play music every day for fun.”

Udio CEO pitches his company as the friendly alternative

Sanchez, the Udio CEO, also loves making music. He’s an opera-loving tenor who’s sung in choirs and grew up crooning Luciano Pavarotti in his family’s home in Buffalo, New York.

Founded in 2023 by a group that included several AI researchers from Google, the startup now employs about 25 people. It has fewer users and has raised less venture capital than Suno, which likely gave Udio a stronger incentive to be first to settle with record labels, said copyright lawyer Brandon Butler.

“A service (like Suno) that gets more venture backing is in some sense hungrier to find revenue streams and more on the hook to all those backers to make sure that they achieve profitability, which would make settling and compromising less attractive,” said Butler, director of the copyright advocacy group Re:Create. “Whereas a company with fewer backers, with less capital, with less access would be weaker and less able to resist the risk that they’re incurring by being involved in litigation.”

Still, Udio embraces its underdog status.

“So many tech companies actively cultivate this I-am-a-tech-company-crusader and that’s part of their identity,” Sanchez said. “That alienates people who are creative and I am uniformly opposed to that.”

Sanchez said he knows not every artist is going to embrace AI, but he hopes those who leave the room after talking with him realize he’s not imposing a kind of “AI bravado.”

“If you took what we’re doing and pretended that the word AI wasn’t a part of it, people would be like, ‘Oh my gosh. This is so cool.’”

Some see potential in AI-assisted music creation

In the basement office of his Philadelphia, Mississippi home, Christopher “Topher” Townsend is a one-man band, making and marketing Billboard-chart-topping gospel music — none of which he sings himself — and doing it in record time.

The rapper, whose lyrics reflect his political conservatism, downloaded Suno in October and, within days, created Solomon Ray, a fictional singer that Townsend calls an extension of himself.

Townsend uses ChatGPT to write lyrics, Suno to generate songs and other AI tools to create cover art and promotional videos under the Solomon Ray name.

“I can see why artists would be afraid,” Townsend said. “(Solomon Ray) has an immaculate voice. He doesn’t get sick. You know, he doesn’t have to take leave, he doesn’t get injured and he can work faster than I can work.”

Trying to dispel that fear for aspiring artists is Jonathan Wyner, a professor of music production and engineering at the Berklee College of Music in Boston, who sees generative AI as just another tool.

“To the creative musician, AI represents both enormous potential benefits in terms of streamlining things and frankly making kinds of music-making possible that weren’t possible before, and making it more accessible to people who want to make music,” he said.

Such a vision remains a tough sell for artists who feel their work has already been exploited. Merritt says she’s particularly concerned about labels making deals with AI companies that leave out independent artists. An open letter she co-signed this week says “many in our community are embracing responsible AI as a tool for creation” but targets Suno as a “smash and grab” business that artists should avoid.

“Artists need to know the difference – all AI platforms are not the same, and Suno, which is being sued for copyright infringement, is not a platform artists should trust,” says the letter from Merritt and six others.
