Have you heard Solomon Ray's new album, Faithful Soul? It is number one on the gospel charts, and it was created entirely by artificial intelligence, just like the artist behind it.
The idea that a popular Spotify artist might not be human reads like a satire of the attention economy itself: an ecosystem once built on authenticity and connection, now crowned by a synthetic voice engineered for maximum lift. What does “soul” even mean if it is generated by software trained on real music?
In a year when other “ghost artists,” like Velvet Sunset, also made headlines, Canada is being forced to rethink an old problem. The fight is over platforms, algorithms and the ever-vague issue of Canadian content, or CanCon. Traditionally, CanCon policy has aimed to ensure the continued survival of compelling, high-quality works by Canadian creators. “Made by Canadians” was the guiding principle.
CanCon emerged in a distinctly sovereignty-minded era. In the 1960s and 1970s, Ottawa worried that broadcasters, studios and cultural products from the United States were flooding Canada's airwaves and shaping Canadian identity. CanCon quotas, the Canadian Radio-television and Telecommunications Commission, the Broadcasting Act, Telefilm and the Canadian Broadcasting Corporation were created as tools of self-determination. They were designed to give Canadian stories space to exist and compete in a market where foreign players, especially American ones, held disproportionate control. Today, the threat comes from foreign artificial intelligence models and the flood of synthetic media, which are eroding the very meaning of the word “Canadian.”
A few weeks ago, the federal broadcast regulator, the CRTC, released a new definition of CanCon: humans, not artificial intelligence, must play a key creative role in a production for it to qualify. That makes sense. But something more fundamental needs scrutiny: the provenance of the content itself. Before we can decide what counts as “Canadian,” we need to be sure that what we hear, watch or read is, in fact, made by humans.
The past month has brought a wave of stories reminding us how close to impossible that is becoming: AI-generated pop songs topping the Billboard charts, “ideal” AI tracks flagged by the BBC, local reporting on an alleged AI-powered journalist impersonation scam, and a viral Bloomberg column begging Spotify to “stop the slop” before it reaches our ears.
People tend to reject fake content, and a recent survey found that Canadians are increasingly “concerned” about AI-generated material. Consumers seem ready to revolt.
So, will the market magically fix the flood of computer-generated material into our feeds? In some corners, that is the way the wind is blowing. Last summer, YouTube announced it would not monetize AI-generated content, while Vine relaunched with a proudly no-AI policy. In other words, some platforms are starting to draw real boundaries.
In other places, however, such as Spotify, that distinction does not exist. The music streaming platform is under no obligation to guarantee, and currently cannot guarantee, that your favorite song was performed by a real person. Meanwhile, music labels are striking deals with AI-driven streaming platforms, and mountains of slop songs are flooding TikTok. Pitchfork is already calling it a crisis.
This slop resembles the “firehose of falsehood” method of spreading disinformation: the system is flooded with so much low-quality synthetic material that authenticity becomes impossible to discern, and platforms default to whatever is cheapest and most scalable. Worse, public trust erodes not only in the content itself, but also in the institutions tasked with curating it.
An optimist might argue that the free market will fix this: audiences will demand human performers, interventions such as labeling will differentiate material, and platforms will adapt. But market discipline only works when consumers can make informed choices, and that requires a level of disclosure the current system actively obscures.
Bridging this information gap has become a live issue for regulators. Quebec, under its privacy law, is currently the only Canadian province that requires government bodies to disclose their use of AI in decision-making. In the United Kingdom, government agencies publish information about their algorithmic tools in a central hub, along with full transparency reports. A recent Organization for Economic Co-operation and Development report on the use of AI in core government functions similarly calls on governments to act openly and in the public interest.
Other jurisdictions recognize that a lack of disclosure is a consumer-protection issue. California's landmark Generative AI: Training Data Transparency Act took effect on January 1, 2026, requiring developers of generative AI systems to publish the sources and ownership of their datasets; whether those datasets include copyrighted works or consumer information; whether any synthetic data was used; and how the data serves the purpose of the AI model. The law, part of California's broader push to govern artificial intelligence systems, aims to pry open the black box by making a model's inputs transparent.
But what also matters to the average internet user are labels. European Union labeling requirements come into force in August 2026. To increase transparency and prevent deception, Article 50 of the EU's AI Act requires flagging AI-generated or AI-manipulated content that could be mistaken for real or human-made, including text, images, voices and video.
In Massachusetts, the Artificial Intelligence Disclosure Act, introduced in February 2025, would require “clear and conspicuous notice” of AI involvement. A number of US states, including Pennsylvania, are considering similar legislation. Georgia has proposed a law requiring disclosure whenever the technology is used in advertising or commerce.
If Canada wants its cultural policy to survive the age of slop, it will have to insist that what claims to be human, and Canadian, is vetted as such. In this context, sovereignty is not simply about shielding domestic production from foreign influence; it is about preserving the conditions under which authorship by a person with a past and a place still matters. Otherwise, “Canadian content” risks becoming as hollow a category as a faux gospel song climbing the charts: compelling, uplifting and ultimately empty.
Adapted, with permission, from the “National Interest” newsletter of the Canadian S.H.I.E.L.D. Institute.