Blog 8: AI “Breaking News” Misinformation in Canada, and What I Would Teach


This week, while browsing YouTube, I came across something very strange: a story about leaked data linking some members of the Canadian Armed Forces (CAF) to a “white only” dating website. According to the Institute for Strategic Dialogue (ISD), CBC analyzed the leaked data and discovered files tied to active CAF personnel, and Canada’s Department of National Defence is also conducting an investigation (Institute for Strategic Dialogue, 2026). But when I saw the topic on social media, the problem was no longer just the investigation itself: the story spread far faster than national authorities could release any findings. “Breaking news” posts travel extremely quickly, accompanied by eye-catching short clips with subtitles, AI voiceovers, and context-free screenshots. These posts usually jump straight to extreme conclusions, such as “the entire army is like this,” or are used to stir up conflict. I also saw reposts that linked to no reliable sources at all, just emotion.

Of course, this kind of manipulation existed before AI. Artificial intelligence simply amplifies misinformation and lowers its cost. By stripping context, it can turn a serious report into outrage narratives about government inaction, hatred of the rich, and so on. MediaSmarts notes that AI-driven misinformation and deepfakes are becoming harder and harder to avoid, and its Media Literacy Week campaign urges people to stop before sharing and ask, “Wait… what?” (MediaSmarts, 2025). That is exactly the habit we need.

Impact on public opinion
People see what they want to see. If they only see the viral version of a story, they may believe it, lose trust in public institutions, and fall right into the trap. Meanwhile, bad actors seize the opportunity to spread extremist ideas, fueling social unrest. On the opportunity side, AI tools are becoming more and more capable; but the risks for digital citizenship in Canada are significant: the speed at which misinformation spreads, algorithmic bias that creates filter bubbles, privacy, and so on. The Dais’ report on youth privacy warns that young users may share sensitive information with genAI tools, and existing privacy protections may not match this new reality (The Dais, 2025b). This is a real Canadian education issue, because students live online.

My suggestions: First, if content triggers an intense emotional reaction, pause for ten seconds. Second, trace the source and look for links from the original publisher and authoritative institutions; no link means unverified. Third, keep looking for evidence, and verify through at least two reliable Canadian sources rather than relying on screenshots alone. Finally, stay alert to AI-specific risks: AI voiceovers and subtitles can distort the original meaning, and research shows that existing AI labels often fail to change people’s intuitive judgments when they are scrolling quickly (The Dais, 2025a).

This is not about “don’t believe anything.” It’s about building habits. Canada is also actively discussing a renewed national AI strategy, and The Dais argues that trust and responsible governance must be central, not just innovation (The Dais, 2025c). That fits education: teach skills, not only rules.

References (APA)

Institute for Strategic Dialogue. (2026, March 9). Leaked data ties Canadian Armed Forces members to a ‘white-only’ dating site [Media mention summarizing the CBC investigation “Canadian military personnel identified on white supremacist dating site”].

MediaSmarts. (2025, October 27). “Wait… What?” Media Literacy Week highlights growing concern over AI-driven misinformation. https://mediasmarts.ca/about-us/press-centre/wait-what-media-literacy-week-highlights-growing-concern-over-ai-driven-misinformation

The Dais. (2025a, March 7). Human or AI? Evaluating labels on AI-generated social media content. https://dais.ca/reports/human-or-ai/

The Dais. (2025b). (Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence. https://dais.ca/reports/generation-ai-safeguarding-youth-privacy-in-the-age-of-generative-artificial-intelligence/

The Dais. (2025c). Submission to the consultation on Canada’s renewed AI strategy. https://dais.ca/reports/submission-to-the-consultation-on-canadas-renewed-ai-strategy/

One Comment

  1. I think you wrote this article well because you didn’t just say that AI creates misinformation; you went further and pointed out that it makes misinformation spread faster, look more like the truth, and drive emotions more easily. Citing the CAF case is also apt, because it clearly shows how, before official investigation results are released, clips, subtitles, and voiceovers on social media can push everyone toward extreme judgments. The suggestions you put forward are practical too: stop first, check the source, and cross-verify with reliable Canadian media. These are not empty words; they can really help people build the habit of evaluating information.
