Yubo, the live social discovery platform that helps young people expand their social circles around the world, is now addressing the challenges of real-time audio moderation. For several years, the Gen Z app has focused on real-time text moderation, preventing its users from sharing harmful, racist, or toxic messages in DMs as well as in profile bios and livestream comments. In addition, Yubo has tackled visual moderation of its media and video, making sure the visual content shared within profiles and in Yubo’s Lives respects the app’s strict Community Guidelines. The platform’s latest goal is to moderate audio content to help combat verbal bullying, hate speech, self-harm, and other threats, a task that remains one of the largest challenges in moderation today.
How does Yubo achieve audio moderation?
Moderating audio remains one of the most difficult tasks for platforms to implement, despite constant technological advances in safety tools and AI. As in real life, victims of online harassment face many of their attacks through audio on social media.
Yubo introduced this latest audio moderation technology in partnership with the cloud-based AI solutions provider Hive. The social discovery platform completed a first trial period through May 2022, focused on the US. After the trial’s success, Yubo recently decided to expand the technology to all English-speaking markets. The main objective now is to keep improving the tool’s efficiency and accuracy in detecting harmful speech and anything else that violates the app’s Community Guidelines.
Yubo’s audio moderation is the first of its kind to be introduced by a social media platform. While incredibly advanced, it works in a simple way: the AI records and automatically analyzes 10-second audio snippets from livestreams with at least 10 participants. It transcribes the audio to text and flags any transcripts that appear to violate Yubo’s Community Guidelines for a closer look. It scans for key words and phrases and currently analyzes over 600 livestreams per day. If the algorithm detects harmful language, the snippet is automatically flagged to one of Yubo’s human Safety Specialists, who review it closely and take action if needed, including escalating to law enforcement when necessary.
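To make the pipeline concrete, here is a minimal sketch of the flagging step described above: a transcribed 10-second snippet is scanned for key words and phrases, and a match routes it to a human reviewer. This is an illustration only, not Yubo’s actual implementation; the function names and the flagged-term list are invented for the example.

```python
# Hypothetical sketch of keyword-based flagging on a transcript.
# The term list below is invented for illustration.
FLAGGED_TERMS = {"hurt you", "hate speech example", "threat example"}

def flag_transcript(transcript: str) -> bool:
    """Return True if the transcript contains a flagged term and
    should be routed to a human Safety Specialist for review."""
    text = transcript.lower()
    return any(term in text for term in FLAGGED_TERMS)

# A transcribed snippet containing a flagged phrase is escalated;
# a benign one passes through without human review.
print(flag_transcript("I will hurt you after school"))      # escalated
print(flag_transcript("what's everyone doing this weekend"))  # not escalated
```

The key design point the article describes is that the AI only filters: the final decision, including any escalation to law enforcement, stays with human Safety Specialists.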
Why does the social media world need audio moderation?
Without any doubt, the biggest benefit of Yubo’s new audio moderation feature is greater safety and reliability when it comes to online interactions. Providing users with as safe an online environment as possible has been Yubo’s primary goal since the app first launched. Adding another layer of security is crucial to protecting its users’ wellbeing and to continuing to build trust in the platform.
What’s great about Yubo’s audio moderation?
Firstly, Yubo’s audio moderation relies on machine learning, which allows the AI’s effectiveness to keep evolving and improving over time. Machine learning is key to identifying patterns and analyzing parallels between similar-sounding audio content that would otherwise be difficult to detect.
It also allows Yubo to process a very large quantity of data and content, reserving only the instances that need further attention for the human Safety Specialists.
What kind of challenges does Yubo’s audio moderation face?
False positives are definitely one of them, as the technology is still learning and improving. Detecting hate speech through key words means the AI sometimes picks up content that doesn’t need to be moderated, such as when users play songs in a livestream. The feature flags the content as problematic when it actually isn’t.
Another example is speech the AI might consider harmful that, in its specific context, is not, such as users joking with one another or using slang terms as a form of endearment. Since the feature isn’t able to analyze context across different situations, this can result in false positives. To deal with this, the added layer of real-time human moderation by Safety Specialists helps the AI improve over time. Reviewing the content is crucial to taking appropriate action, especially in the most concerning cases.
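The false-positive problem above can be shown with the same kind of context-blind keyword matching: a flagged word inside song lyrics or friendly banter triggers the filter even though no one is being targeted. Again, the terms and examples here are invented for illustration, not taken from Yubo’s system.

```python
# Hypothetical sketch: why context-blind keyword matching over-flags.
FLAGGED_TERMS = {"hate", "kill"}  # invented example terms

def naive_flag(transcript: str) -> bool:
    """Flag a transcript if any flagged term appears, ignoring context."""
    text = transcript.lower()
    return any(term in text for term in FLAGGED_TERMS)

# A user singing along to lyrics is flagged even though the snippet
# is harmless; only human review can clear it.
lyric_snippet = "singing along: I hate everything about this song"
print(naive_flag(lyric_snippet))  # flagged: a false positive
```

This is why the article stresses pairing the AI filter with human Safety Specialists: the machine cannot tell a lyric or an in-joke from a genuine threat.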
This challenge shows how complicated it is for platforms to tackle audio moderation, and it underscores the importance of combining different tools to capture nuance, which is exactly what Yubo has done. Improving precision and reducing the number of false positives is the next step Yubo is concentrating on.