Computers now know a great deal about music, from the biographies of artists down to the pitch of each note in a guitar solo. As on-demand streaming services such as Spotify become the dominant way people listen, we're merging machine learning and knowledge-based approaches to music understanding with unprecedented amounts of user activity data to truly unlock the meaning of music taste and preference at scale.
Brian Whitman was the founder and CTO of The Echo Nest, which powers most online music recommenders and was acquired by Spotify in 2014. He will show the various ways big-data approaches have changed how people interact with music, along with some surprising insights and applications of machine learning and data mining on music content and activity.
Brian is recognized as a leading scientist in the area of music and text retrieval and natural language processing.
He received his doctorate from MIT's Media Lab in 2005 in Barry Vercoe's Machine Listening group and a master's degree in computer science from Columbia University's Natural Language Processing Group. Brian's research focuses on the cultural analysis of music through large-scale data mining and machine learning.
Brian recorded and performed as Blitter until he co-founded The Echo Nest, now a Spotify subsidiary, and he currently works on large-scale automated music synthesis.