By Felix Laumann

Hate Speech Data Comparison: NeuralSpace, AWS Comprehend, Google Vertex AI, and Hugging Face


Introduction

Hate speech, toxic language, and abusive comments on social media have increased sharply over the last decade, with growing political polarization and populist rhetoric fueling the trend even further. The Facebooks, Twitters, and TikToks of the world try their best to keep extreme content at bay, but the sheer scale of their user bases makes this a very difficult endeavour. As with so many large-scale problems, machine-learning-based automation promises an answer: Natural Language Understanding (NLU) models that scan every single comment posted on these platforms and quickly check whether it contains any form of hateful, toxic, or abusive phrasing.



NeuralSpace Data Studio with Hate Speech Data in Arabic


In this comparison, the researchers at NeuralSpace collected various datasets containing social media comments labelled as “toxic” or “non-toxic”. Given NeuralSpace’s ability to work natively in almost 60 languages, datasets in several different languages were chosen for this evaluation: Arabic, Danish, Italian, Polish, and Portuguese. NeuralSpace’s AutoNLP-powered NLU technology was compared against those of AWS Comprehend, Google Vertex AI, and Hugging Face’s AutoNLP. AWS Comprehend’s results were obtained with its AutoML functionality, Comprehend Custom, which leverages transfer learning to perform well when data are scarce. The Google evaluation used its new AutoML tool, Vertex AI, Google’s one-stop solution for machine learning. For Hugging Face, its AutoNLP model was used, with English as the base language whenever a dataset’s language was not available.
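Each provider was scored on the same held-out test sets. As a minimal sketch of how such a binary toxic/non-toxic evaluation is scored (the function and data below are illustrative, not the actual evaluation harness used in this comparison):

```python
# Sketch: computing accuracy for a binary "toxic" vs "non-toxic"
# classifier on a held-out, labelled test set.
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    assert len(predictions) == len(labels), "prediction/label count mismatch"
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Toy example: four test comments, one misclassified.
gold = ["toxic", "non-toxic", "non-toxic", "toxic"]
pred = ["toxic", "non-toxic", "toxic", "toxic"]
print(accuracy(pred, gold))  # 0.75
```

The same metric, computed per language and per provider, fills the comparison table below.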


Results

With far less market experience than its competitors and a brand-new solution, NeuralSpace’s AutoNLP achieved results comparable to all of these established providers across all tested languages. The table below shows the accuracy achieved by each provider.




NeuralSpace Platform

Below is the first publicly available view of the Data Studio of the NeuralSpace Platform, where all of the training data are prepared. It supports phonetic typing and automatic dataset augmentation, and is shown here annotating a hate-speech dataset in Arabic with a pop-up entity tagger.






Filters for Named Entity Recognition (NER)








The results for NeuralSpace were achieved using the NeuralSpace Platform, which includes multiple NLP-specific apps such as Translation, Transliteration, Speaker Identification, Speech-to-Text, Text-to-Speech, and Language Understanding (the app used for these experiments), with many more to come. The philosophy behind it is that every language-related business problem, in both text and audio, can be broken down into sub-problems, each solved by one specific app on the NeuralSpace Platform. The apps are structured so that they can easily be plugged together, one after the other, to build NLP pipelines. For example, a voice assistant like Amazon Alexa requires a pipeline consisting of Speech-to-Text, Language Understanding, and Text-to-Speech, potentially also including Speaker Identification or Transliteration.
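The voice-assistant pipeline described above can be sketched as chained function calls. Note that the function names and return values below are placeholders invented for illustration, not the actual NeuralSpace SDK; each stub stands in for one app on the Platform:

```python
# Illustrative pipeline sketch: Speech-to-Text -> Language Understanding
# -> Text-to-Speech. All three functions are hypothetical stubs.
def speech_to_text(audio: bytes) -> str:
    return "turn on the lights"  # stub: a real app would transcribe the audio

def language_understanding(text: str) -> dict:
    # stub: a real app would classify intent and extract entities
    return {"intent": "lights_on", "entities": []}

def text_to_speech(text: str) -> bytes:
    return text.encode()  # stub: a real app would synthesize audio

def voice_assistant(audio: bytes) -> bytes:
    """Plug the apps together, one after the other."""
    nlu_result = language_understanding(speech_to_text(audio))
    reply = f"Okay, executing {nlu_result['intent']}."
    return text_to_speech(reply)

print(voice_assistant(b"<raw audio>"))
```

Because each stage consumes the previous stage’s output, extra apps such as Speaker Identification can be slotted in between without changing the rest of the pipeline.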





NeuralSpace Dashboard with intent distribution at the top, version control at the bottom left, and accuracy at the bottom right.

With the NeuralSpace Platform, these building blocks can be used in a drag-and-drop fashion on its online graphical user interface (GUI), via simple APIs, or through an intuitive CLI. Users do not need to understand any of the underlying complexities of the state-of-the-art deep learning models powering NeuralSpace’s apps and can simply click the Train button to start an AutoNLP-powered model training.
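To give a flavour of the API route, the sketch below only builds the JSON body such a training request might carry. The field names and values are assumptions for illustration, not the documented NeuralSpace API, and no network call is made:

```python
import json

def build_train_request(project_id: str, language: str) -> str:
    # Hypothetical payload fields; a real request would follow the
    # provider's documented schema.
    payload = {
        "project_id": project_id,
        "language": language,  # e.g. "ar" for the Arabic hate-speech dataset
        "auto_nlp": True,      # let AutoNLP choose model and hyperparameters
    }
    return json.dumps(payload)

print(build_train_request("hate-speech-demo", "ar"))
```

The same payload could equally be assembled by the GUI’s Train button or an equivalent CLI command; the point is that the user never touches the underlying models.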


To learn more about NeuralSpace and what we are currently developing, follow us on LinkedIn, Twitter, and Medium, and check out our website.

