Zignal Labs was pleased to be invited to present at the 2018 re:Invent conference last month, hosted by Amazon Web Services. We shared our experiences using Amazon Mechanical Turk, a crowdsourcing marketplace, to revamp Zignal's sentiment analysis, which relies on continuous training of our machine learning platform. Here is a recap of our key insights:

The Difficulty of Sentiment Analysis

Sentiment analysis is an important feature of Zignal software. In the ‘reputation age,’ a company’s brand is defined by the billions of conversations and stories happening across different digital and social media platforms every day. As a company’s value is increasingly tied to reputation, it is important for businesses to understand sentiment about brands, competitors, topics and a host of other areas throughout this dynamic media landscape.

Sentiment analysis helps companies synthesize massive quantities of data, counter the threat of fake news and build trust in their brands. For Zignal, providing reliable and accurate sentiment data is key to a successful user experience.

Making this happen posed a unique problem for our machine learning engineers, especially because sentiment is difficult to define in terms simple enough for a computer to work with. Third-party sentiment analysis models scanned conversations for words categorized as positive, neutral, or negative to make decisions. But determining sentiment is a more nuanced art — there is a difference between messages that simply contain positive or negative words and those that constitute a positive or negative message about a brand.

This tweet expresses positive sentiments about John McCain, but uses negative words like “lost” and “mourn.”
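A minimal sketch makes the failure mode concrete. The word lists and scoring rule below are illustrative assumptions, not any vendor's actual model, but they show how lexicon counting misreads a tribute message like the tweet above:

```python
# Naive lexicon-based scoring (hypothetical word lists), illustrating
# why counting positive/negative words alone misreads sentiment.
POSITIVE = {"great", "win", "love", "hero"}
NEGATIVE = {"lost", "mourn", "bad", "fail"}

def lexicon_score(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A tribute message: positive in intent, but dominated by negative words.
print(lexicon_score("we lost a hero today and we mourn him"))  # "negative"
```

The scorer labels the tribute "negative" even though its impact on the subject's reputation is positive, which is exactly the gap "polarity for reputation" is meant to close.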

In building a tool capable of natural language processing, applying the concept of “polarity for reputation” (the positive or negative nature of a message’s impact on a company’s brand) in a scalable and reliable manner is key. This requires human input to train our sentiment analysis tool on the machine learning platform, for which we leveraged our partnership with Amazon Web Services.

Mechanical Turk

Amazon Mechanical Turk is an online crowdsourcing marketplace for work that requires human intelligence, providing this resource at scale through an API. It addresses the gap between artificial and human intelligence — there are still areas like computer vision or data categorization that require human insight to be done right.
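As a sketch of how such a task reaches workers through the API, the function below assembles the request for a single sentiment-labeling HIT. The title, reward, and question layout are illustrative assumptions; the actual `create_hit` call (via boto3) is shown commented out since it requires AWS credentials:

```python
# Hedged sketch: assembling a sentiment-labeling HIT request for the
# Mechanical Turk API. Reward and wording are illustrative assumptions.

def build_hit_request(text: str) -> dict:
    question_xml = f"""
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <html><body>
          <p>What is the sentiment of this message toward the brand?</p>
          <blockquote>{text}</blockquote>
          <p><input type="radio" name="sentiment" value="positive"/> Positive
             <input type="radio" name="sentiment" value="neutral"/> Neutral
             <input type="radio" name="sentiment" value="negative"/> Negative</p>
        </body></html>
      ]]></HTMLContent>
      <FrameHeight>400</FrameHeight>
    </HTMLQuestion>"""
    return {
        "Title": "Label the sentiment of a short message",
        "Description": "Read a message and choose positive, neutral, or negative.",
        "Reward": "0.05",                    # dollars per assignment (assumed)
        "MaxAssignments": 5,                 # redundant answers per item
        "AssignmentDurationInSeconds": 300,
        "LifetimeInSeconds": 86400,
        "Question": question_xml,
    }

# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# hit = mturk.create_hit(**build_hit_request("Great quarter for AcmeCo!"))
```

Setting `MaxAssignments` above one is what buys the statistical redundancy described below: several independent workers answer the same question.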


While people are able to easily tell the difference between a chihuahua and a blueberry muffin, a computer will have a much harder time doing so.

The crowdsourced nature of Mechanical Turk makes the process of applying human intelligence to tasks flexible, scalable and affordable.

The quality of output from machine learning tools is dependent on the quality and quantity of data put into them. Consequently, access to high-quality labeled data at scale is imperative for Zignal Labs to deliver a sentiment solution that scales to our product and provides actionable real-time insights.

Enter Wisdom of the Crowd

Partnering with Amazon Mechanical Turk provided an upgrade to the machine learning workflow, allowing us to leverage hundreds of thousands of people in the cloud to build datasets in an ongoing fashion.


Source: https://www.mturk.com/

Data quality is ensured through statistical analysis on our workforce. At least five people will answer any given sentiment labeling question. As workers answer more questions, we develop an understanding of worker biases and quality, which we can then leverage to determine sentiment, understand ambiguities and articulate our confidence in the data classification.
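The aggregation step can be sketched roughly as a weighted vote over the redundant answers. The weighting scheme below (per-worker accuracy estimates, defaulting to equal trust) is an illustrative assumption, not Zignal's actual statistical model:

```python
from collections import Counter

# Sketch: aggregate redundant worker labels for one item into a single
# label plus a confidence score. Worker weights are assumed accuracy
# estimates built up as workers answer more questions.

def aggregate(labels, worker_accuracy=None):
    """labels: list of (worker_id, label) pairs for one item."""
    weights = Counter()
    for worker, label in labels:
        w = (worker_accuracy or {}).get(worker, 1.0)  # default: equal trust
        weights[label] += w
    label, weight = weights.most_common(1)[0]
    confidence = weight / sum(weights.values())
    return label, confidence

answers = [("w1", "positive"), ("w2", "positive"), ("w3", "negative"),
           ("w4", "positive"), ("w5", "neutral")]
label, conf = aggregate(answers)
print(label, conf)  # positive 0.6
```

A low confidence score flags the item as ambiguous, which is how disagreement among workers surfaces rather than being silently discarded.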

Amazon Web Services helped us establish a data labeling pipeline that ensured a continuous influx of high-quality human input at scale for sentiment analysis. We were also able to provide clients with accuracy metrics of our sentiment analysis data for maximum transparency and reliability.

Benefits and Improvements

The resulting benefits of the Amazon partnership were significant: Zignal was able to cut development and operations costs by 90 percent, which freed up resources that we could then direct to tackle other, more complex problems like automated account detection or named entity recognition. In addition, we were able to improve the precision of our sentiment analysis by 30 percent using natural language processing.

This means our new sentiment analysis tool has a near-75 percent accuracy. Extensive internal testing revealed a satisfaction rate of 100 percent from our sample test group, and a reduction in sentiment label manual overrides of at least 60 percent and up to 90 percent.

Our success across AWS platforms is why Amazon published a study on our use case and invited us to this year’s re:Invent conference to share our insights from throughout the journey. You can access the case study here and our full presentation at the conference here.