27 August 2024

How AI is making digital platforms safer

During Netsafety Week, Netsafe and NZTech hosted a webinar discussing online safety from a platforms' perspective, featuring experts in the space from TikTok and Meta, who will also be part of the conversation at our first Wellington Dialogues Digital Safety Summit in October.

As technology evolves, it presents both opportunities and challenges for online safety. During the webinar, our speakers discussed how their organisations are adapting to ensure a safe online experience for everyone, how AI is being used to enhance online safety and security, and how online safety presents a market opportunity.

Watch the full webinar, or read the summary below, and if you’d like to be part of the continued dialogue on this topic, please join us at The Wellington Dialogues, our Digital Safety Summit on 31 October in Wellington.

Mia Garlick, Senior Regional Director of Policy at Meta, kicked things off by talking about how AI has been instrumental in creating safer online experiences, proactively detecting and removing harmful content before users see it. She acknowledged that it comes with its challenges, particularly with the growing presence of generative AI and LLMs, and talked about generative AI's double-edged potential: it can both worsen online safety and act as a valuable tool to improve the safety of online platforms.

Mia explained that transparency and good governance are the key to combating the negative potential of AI: Meta has implemented transparency tools, policies, and industry collaborations to ensure responsible AI use, including disclosure and watermarking of AI-generated content.

She also talked about the suite of AI tools now available in Meta products to help customers harness the technology: Meta's new AI assistant and generative AI tools such as Emu and Llama are advancing user experiences. Meta's Llama models are open source, allowing developers to adapt them, with positive impacts already seen in cybersecurity and safety initiatives.

Next up, Jed Horner, Product Safety – Trust and Safety at TikTok, continued the discussion about the role of AI in creating safer online platforms. He explained that AI is essential in moderating content, improving online safety, and fostering a positive environment on TikTok.

TikTok uses AI to proactively detect and remove harmful content, focusing on safety and mental well-being. AI helps enforce TikTok’s community guidelines, ensuring a safe space for users. Jed emphasized the ongoing development of AI tools to address emerging challenges in online safety and content moderation.

Our final speaker, Nigel Hansen, is an SCA, SBOM & Threat Modeling Specialist on a Global Security Team, and a member of the NZ Internet Task Force. He began by talking about the scope of digital safety, which covers data security, personal safety, and physical safety from cyber threats.

He explained that the emerging focus in this space is on prioritising how vulnerabilities can cause real-world harm, looking beyond traditional cybersecurity.

Nigel believes that AI can be used to assess and improve both security and safety, especially in systems connected to the Internet. But harnessing this potential requires a mindset shift — moving from asking if systems are secure to ensuring products and services are safe. His call to action was to encourage attendees to participate in digital safety initiatives and conversations.

If you would like to take Nigel's advice, we recommend attending our first Digital Safety Summit: The Wellington Dialogues, where NZTech and Netsafe are bringing the ecosystem together across the public and private sectors to discuss safer online services, platforms and devices over a one-day programme of keynotes, panels, and plenty of opportunities for discussion.

Register now > 

 

By Courteney Peters

Tākina Convention & Exhibition Centre