We're collecting more and more data every day, but how do we analyze that data to make good decisions about our users, products, and customers? How do we make sense of overwhelming volumes of rapidly changing data and find the salient information in a timely fashion?
Data science has become an increasingly popular field, with many different algorithms that can be applied to data sets stored in rows and columns. Storing connected data in a graph database gives data scientists some unique opportunities: we can run algorithms such as PageRank, Betweenness Centrality, and community detection directly on the graph.
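To give a flavor of what an algorithm like PageRank computes on connected data, here is a minimal, illustrative power-iteration sketch in pure Python. The toy "retweet" graph and all names in it are hypothetical examples, not data or code from the talk; in practice these algorithms run inside Neo4j or in libraries like iGraph rather than hand-rolled like this.

```python
# Minimal PageRank by power iteration on a toy directed graph.
# Illustrative sketch only - the graph and node names are made up.

def pagerank(edges, damping=0.85, iterations=50):
    """Compute PageRank scores for a directed graph given as (src, dst) edges."""
    nodes = {n for edge in edges for n in edge}
    out_links = {node: [] for node in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}  # start with a uniform distribution
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in out_links.items():
            if targets:
                # Each node passes its damped rank evenly to its out-neighbors.
                share = damping * rank[node] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # Dangling node: spread its rank evenly over all nodes.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Toy "retweet" graph: an edge points from the retweeter to the original author.
edges = [("alice", "bob"), ("carol", "bob"), ("dave", "carol"), ("bob", "alice")]
scores = pagerank(edges)
# "bob" ends up with the highest score: two accounts retweet him directly.
```

The same idea scales up when the computation runs where the graph already lives, which is part of the motivation for the in-database implementations discussed next.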
We've implemented many of these algorithms on top of Neo4j, some by exporting data to R and iGraph, and others running directly inside Neo4j as Java stored procedures in the APOC library. We'll discuss why we chose to implement them directly in Neo4j versus alternative architectures.
This session will briefly discuss the value of those algorithms, then dive into how we used them to analyze the US Presidential Election using Twitter data. You'll see a mixture of code, visualizations, and thoughtful analysis to better understand the conversations happening around the election.