Read the script below

Tyla: Hello and welcome to Safeguarding Soundbites, with me, Tyla.

Natalie: And me, Natalie. This week, we’re talking about Meta in the Senate…

Tyla: Worrying search engine results…

Natalie: And the future of disposable vapes.

Tyla: Kick us off, Natalie?

Natalie: Sure!

Our first story this week comes from America, where social media leaders, including Mark Zuckerberg of Meta, faced scrutiny at a recent Senate hearing regarding their platforms’ handling of child safety. Senators expressed strong concerns about the potential dangers children face online, including predators, harmful content, and cyberbullying.

The hearing featured emotional video testimonies from young victims and heartbreaking stories shared by senators, highlighting the real-world consequences of inadequate online protections. Senators emphasised the need for stronger measures, citing features on platforms like Instagram that warn users about potentially abusive content but still allow access to it.

Meta have announced plans to block harmful content from users under 18 and to direct them to mental health resources when engaging with posts about self-harm or eating disorders. Additionally, Zuckerberg offered an apology to families affected by harmful online content. However, some experts remain sceptical, questioning the effectiveness of these measures without broader changes to platform algorithms and enforcement strategies.

Tyla: Interestingly, this comes right after the news that Meta have announced a new safety tool designed to prevent the sending and receiving of nude images in private messages, including encrypted chats on Facebook and Instagram.

Meta said that the tool, which will be optional and likely available to adults as well, is intended to protect users, particularly women and young people, from unsolicited nude images. While machine learning will be used to identify potential nudity, Meta stresses that this technology is not suitable for detecting child sexual abuse material due to potential inaccuracies.

Natalie: It’ll be interesting to see how effective it is; we’ll certainly keep an eye on reports about how this new safety feature is working.

Tyla: We will. On to our next story now, and a recent study by Ofcom has shone a light on a troubling reality: search engines can inadvertently act as gateways to harmful self-injury content. The research unveiled a worrying connection between specific online searches and potentially dangerous web pages, images, and videos.

The investigation revealed a particularly concerning trend in image searches, where a staggering 50% of results were categorised as extreme or harmful. Additionally, users who employed specific, often-obscure search terms were a striking six times more likely to stumble upon self-injury content compared to standard searches.

But it wasn’t all bleak. The study also found that one in five search results (22%) offered a glimmer of hope. These results directly linked to resources geared towards getting help, including mental health services and educational materials highlighting the dangers of self-harm.

With the Online Safety Act coming into effect, search engines face the responsibility of minimising children’s exposure to harmful content on their platforms.

Natalie: Let’s hope we see some positive changes soon. Our next story also relates to the Online Safety Act, in particular deepfake pornography, which is now illegal across the UK. This week, social media platform X – formerly Twitter – had to block searches for singer Taylor Swift following the viral spread of explicit deepfakes of her, which were generated using artificial intelligence and amassed millions of views.

X enacted a temporary block on searches for “Taylor Swift”, reiterated its zero-tolerance policy towards non-consensual nudity, and stated that it was actively removing the identified images and taking action against the accounts responsible for posting them. The graphic nature of the content drew attention to the potential dangers of deepfakes, which can be used to create and spread misinformation, damage reputations, and even facilitate harassment.

A 2023 study revealed a staggering 550% increase in the creation of doctored images since 2019, largely driven by advancements in AI technology. The ease of creating and sharing deepfakes raises significant questions and concerns.

Tyla: Really, really concerning, Natalie.

Now, if you’ve been watching, reading or listening to the news, you’ve likely heard our next story – that the UK government is planning to bring in a ban on disposable vapes. Disposable vapes, often marketed with appealing flavours and designs, have raised concerns due to their ease of access and potential health risks, particularly the highly addictive nature of the nicotine they contain.

Alongside the ban, additional measures were announced, including marketing restrictions: stricter regulations to prevent vapes being marketed to children, making them less appealing and accessible to this vulnerable demographic. The government also plans to introduce harsher penalties for shops caught selling vapes illegally to minors and to launch a public consultation to gather feedback.

While health organisations like the British Lung Foundation applaud the ban as a necessary step towards protecting children’s health, the vaping industry expresses concerns about potential negative consequences. Some argue that the ban might create a black market and advocate for increased enforcement of existing regulations instead.

The ban is expected to be brought in across England, Scotland and Wales, and in Northern Ireland in the future. Quick ad break now and then we’ll be right back.

To find out more about vaping, you can visit our website ineqe.com or one of our Safeguarding apps and search for ‘vaping’ to find our need-to-know guide on youth vaping.

[Ad break]

Tyla: And finally, it’s our Safeguarding Success Story of the Week! As of Wednesday, new offences have been introduced as the Online Safety Act comes into force. Cyberflashing and epilepsy-trolling are now criminalised. Cyberflashing is the sending of unwanted sexual images, and epilepsy-trolling is the sending of flashing images electronically with the intention of harming people with epilepsy. Other offences from the act include sending death threats, sharing so-called ‘revenge porn’ and sending fake news in order to cause harm.

Natalie: Great to see these new laws coming into effect; we’ll wait and see what impact they have and hope they can begin our journey to a safer internet for all of us. And if you’re looking for a handy Online Safety Act explainer for young people, check out our most recent episode of The Online Safety Show, which you can find on our website, ineqe.com.

That’s all from us – thank you for listening. Remember you can follow us on social media by searching for Ineqe Safeguarding Group.

Tyla: Until next time…

Both: Stay safe.


