
Q&A with Data Scientist, Jay Buckingham

For a large enterprise brand, the media intelligence universe can produce a torrent of data from traditional and digital media. Within this trove, Zignal surfaces the stories and issues relevant to a company’s brand, along with the key influencers behind them. Since 2011, Zignal has worked with PR, communications and digital marketing professionals to transform this media intelligence data into a strategic asset that supports both day-to-day and long-term decision making.

Through advances in data science, Zignal can now take this same data set and apply machine learning to provide context within seconds. I recently had the opportunity to catch up with Jay Buckingham, Zignal’s Senior Data Research Scientist. He has previously worked at Microsoft, Klout and, most recently, Apple, applying machine learning research to challenging real-world pattern recognition problems, particularly in the realm of language. I sat down with him to find out how he and his team are applying their machine learning expertise to the Zignal Labs media intelligence platform.

Providing Context in Seconds

Tom: Jay, what are the benefits of combining media intelligence and machine learning?

Jay: Zignal’s platform puts the right data points in the hands of our customers — yet this is still a large amount of information to sift through. Our goal is to provide context in seconds, summarizing what may be thousands of stories in a couple of sentences.

For example, if our client were the SF Giants and an alert was generated containing 1,000 stories, I want to make it immediately obvious that the focus is their star pitcher and his ongoing contract negotiation. We can do this because the majority of articles about the same topic use a particular cluster of keywords.
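
As a rough illustration of the keyword-cluster idea Jay describes, here is a minimal sketch (not Zignal's actual pipeline) that surfaces the terms a batch of alert stories share, using scikit-learn's TF-IDF weighting; the story snippets are invented:

```python
# Minimal sketch: surface the cluster of keywords that a batch of alert
# stories share, by summing TF-IDF weights across the batch.
# Illustrative only; not Zignal's production pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_shared_keywords(stories, n_terms=10):
    """Return the highest-weighted terms across a batch of stories."""
    vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    tfidf = vectorizer.fit_transform(stories)   # (n_stories, n_features), sparse
    weights = tfidf.sum(axis=0).A1              # total weight of each term
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms, weights), key=lambda pair: -pair[1])
    return [term for term, _ in ranked[:n_terms]]

# Invented story snippets standing in for an alert's contents.
stories = [
    "The Giants' star pitcher remains locked in contract negotiations.",
    "Contract talks between the pitcher and the Giants continued Tuesday.",
    "Fans react as the Giants' pitcher nears a new contract.",
]
print(top_shared_keywords(stories))   # e.g. ['contract', 'pitcher', ...]
```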

Tom: Can you dive a bit more under the covers and explain how you make this happen?

Jay: Sure. Our data science team uses Natural Language Processing (NLP) tools like NLTK and Stanford CoreNLP to parse the sentences in a story into nouns, verbs, etc. Given a set of stories that we know are about the same topic, we then apply machine learning algorithms to train models that can detect more stories “like these”. These algorithms do much more than just look for stories that share words and phrases — they build models that capture a much richer set of the deep statistical regularities shared by stories about one topic. Using these trained models we can do things like check whether a new story is “about” the same topic. We can also offer a sample story that is reflective of the whole set to provide additional context. This is huge, as customers no longer have to sift through dozens of articles to understand what is being said about their brand.
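
To make the “more stories like these” and representative-story ideas concrete, here is a hedged sketch under simple assumptions: a plain bag-of-words classifier stands in for the richer models Jay describes, and all of the example texts are invented.

```python
# Hedged sketch of the "more stories like these" step, not the actual models
# Zignal trains: fit a simple classifier on stories known to be about a topic
# versus other stories, score a new story, and pick a representative story
# closest to the topic centroid.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

on_topic = [
    "The Giants and their star pitcher resumed contract negotiations Monday.",
    "Talks over the pitcher's new contract with the Giants are ongoing.",
    "The Giants' front office hopes to close the pitcher's deal this week.",
]
off_topic = [
    "The Giants unveiled a new garlic fries stand at the ballpark.",
    "Ticket prices for weekend games will rise slightly next season.",
    "The team announced a bobblehead giveaway for fans in May.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(on_topic + off_topic)
y = [1] * len(on_topic) + [0] * len(off_topic)
clf = LogisticRegression().fit(X, y)

# Check whether a new story is "about" the same topic.
new_story = "Sources say the pitcher and the club are close to a new deal."
p_on_topic = clf.predict_proba(vectorizer.transform([new_story]))[0, 1]

# Offer a representative story: the on-topic story closest to the centroid.
topic_vecs = vectorizer.transform(on_topic)
centroid = np.asarray(topic_vecs.mean(axis=0))
representative = on_topic[cosine_similarity(topic_vecs, centroid).argmax()]
```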

The Sentiment in Every Message

Tom: For a typical enterprise or large brand, how can they leverage context?

Jay: Another aspect of providing context is showing the sentiment behind a message. If I were to say “I hate my Canon camera, I wish I’d never bought it, but your Nikon camera seems incredible,” a human would be able to tell that I have positive feelings towards Nikon, while my opinion of Canon is less than favorable. For a computer to do this, it needs to identify each entity in the sentence in order to attach the correct sentiment; it needs to know whether a word is a noun, verb, adjective, preposition or conjunction, and how the words relate to one another. Again, we use NLP tools for entity recognition. The team at IBM Watson has created an excellent sentiment analysis tool that would take a team of more than 20 data scientists for us to attempt to recreate.
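
As a small, hedged illustration of entity-level sentiment (not the IBM Watson tool Jay mentions, and far cruder), the sketch below uses NLTK's part-of-speech tagger to find proper nouns and the VADER lexicon to score each clause; the entity_sentiment() helper is invented for this example:

```python
# Minimal sketch of entity-level sentiment with NLTK.
# Assumes the NLTK tokenizer, POS tagger and VADER lexicon data
# have already been downloaded via nltk.download().
import re
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

def entity_sentiment(text):
    """Attach a clause-level sentiment score to each proper noun found."""
    sia = SentimentIntensityAnalyzer()
    results = {}
    # Split on simple clause boundaries so each opinion is scored separately.
    for clause in re.split(r",|\bbut\b", text):
        score = sia.polarity_scores(clause)["compound"]
        for word, tag in nltk.pos_tag(nltk.word_tokenize(clause)):
            if tag in ("NNP", "NNPS"):   # proper nouns, e.g. brand names
                results[word] = score
    return results

print(entity_sentiment(
    "I hate my Canon camera, I wish I'd never bought it, "
    "but your Nikon camera seems incredible"
))
# Expected shape of the output: Canon gets a negative score, Nikon a positive one.
```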

We harness the power of these sentiment analysis and entity extraction tools and use them as building blocks for what we do. We take the sentiment analysis they provide and use it to give further context on how people are talking about a customer’s brand. The first stage is providing basic sentiment for the key topics in a group of articles: lots of people are unhappy with the new cameras from Canon, while Nikon owners are happy. The next step is to broaden the types of emotions we measure, like surprise and anger, to give even greater context into the content around a brand.
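
Building on the previous sketch, a brand-level summary of the kind Jay describes (“lots of people are unhappy with Canon, Nikon owners are happy”) could be approximated by averaging per-entity scores over many articles; again, this is illustrative only and reuses the hypothetical entity_sentiment() helper from above:

```python
# Illustrative aggregation of entity-level sentiment across many articles,
# reusing the hypothetical entity_sentiment() sketch above.
from collections import defaultdict

def brand_sentiment_summary(articles):
    """Average per-entity sentiment scores over a batch of articles."""
    totals, counts = defaultdict(float), defaultdict(int)
    for article in articles:
        for entity, score in entity_sentiment(article).items():
            totals[entity] += score
            counts[entity] += 1
    return {entity: totals[entity] / counts[entity] for entity in totals}
```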

The Future

Tom: Where is machine learning heading — especially as it relates to media intelligence?

Jay: What we have observed across the field of data science is the success teams have found using neural networks, in particular networks with many layers of simulated neurons, an approach known as Deep Learning. Almost all of the recent successes in machine learning on very difficult tasks have used Deep Learning techniques. One of the drawbacks of this technique is that it takes a few million data points to train a model; luckily, we have that volume of data at our fingertips at Zignal, which was one of the things that drew me to the company. Eventually, Deep Learning will enable us to make dramatically more nuanced statements about the content of an article. Another advantage is that it will allow us to offer recommendations to customers based on the actions of others: how brands faced with similar circumstances reacted in the past and what their response was, with the suggested action tailored to the keywords and influencers involved in the current situation.
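
For readers curious what a deep learning setup for story text might look like, here is a hedged, minimal Keras sketch: an embedding layer plus small dense layers trained toward an on-topic/off-topic label. It is illustrative only, not Zignal's model, and as Jay notes this kind of approach only pays off with very large training sets.

```python
# Minimal, illustrative deep learning text classifier in Keras.
# Not Zignal's production model; all names and sizes are placeholders.
import tensorflow as tf

max_tokens, sequence_length = 20_000, 200

vectorize = tf.keras.layers.TextVectorization(
    max_tokens=max_tokens, output_sequence_length=sequence_length)
# vectorize.adapt(train_texts)   # train_texts: a large corpus of story text

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(max_tokens, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g. on-topic vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_texts, train_labels, epochs=3)
```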

If you would like to learn more about the Zignal Labs engineering team, visit our #INTHELABS page.