Creating your own musical pieces is one of the most rewarding ways to enjoy music. However, many people lack the basic musical skills to do so. In this paper, we explore how machine learning algorithms can enable musically untrained users to create their own music.
To achieve this, we propose a Neural Hidden Markov Model (NHMM), a hybrid of a Hidden Markov Model and a convolutional neural network with LSTM layers. The model accepts users' original musical ideas through an easy, intuitive interface, automatically modifies the input, and generates musically appropriate melodies as output. We further extend the model to let users specify the magnitude of revision, the duration of the segment to be revised, the music genre, the popularity of reference songs, and the co-creation of songs in social settings. These extensions deepen users' musical understanding, enrich their experience of self-directed music learning, and enable social music making. The model is trained on the publicly available Million Song Dataset. We also conduct experiments on melody generation.
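To make the hybrid idea concrete, the sketch below shows a Hidden Markov Model that decodes a "revised" melody from a user's raw input with the Viterbi algorithm, where the emission scores are a stand-in for what a trained CNN/LSTM would produce. All pitches, probabilities, and function names here are illustrative assumptions, not the paper's actual model or parameters.

```python
import math

PITCHES = ["C", "D", "E", "F", "G"]  # hidden states: candidate melody notes

def transition_logp(prev, curr):
    # Favor small melodic steps over large leaps (hand-chosen toy values).
    step = abs(PITCHES.index(prev) - PITCHES.index(curr))
    return {0: math.log(0.4), 1: math.log(0.2)}.get(step, math.log(0.05))

def emission_logp(user_note, pitch):
    # Stub for the neural emission model: how well a candidate pitch
    # "explains" the user's input note. A real system would replace this
    # with a trained network's log-likelihood.
    return math.log(0.7) if user_note == pitch else math.log(0.075)

def viterbi(user_melody):
    """Decode the most likely revised melody for the user's input."""
    # Initialization: score each state against the first input note.
    V = [{p: emission_logp(user_melody[0], p) for p in PITCHES}]
    back = []
    for note in user_melody[1:]:
        scores, ptr = {}, {}
        for p in PITCHES:
            best_prev = max(PITCHES,
                            key=lambda q: V[-1][q] + transition_logp(q, p))
            scores[p] = (V[-1][best_prev] + transition_logp(best_prev, p)
                         + emission_logp(note, p))
            ptr[p] = best_prev
        V.append(scores)
        back.append(ptr)
    # Backtrack from the best final state.
    last = max(PITCHES, key=lambda p: V[-1][p])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["C", "G", "D"]))  # one possible smoothed melody
```

A "magnitude of revision" control, as described above, could be realized by scaling the emission term relative to the transition term: the stronger the transitions, the more the decoder overrides the user's literal input.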
We also design a mobile application with an intuitive, interactive graphical user interface suitable for the elderly and young children. Unlike most existing literature, which focuses on autonomous computer composition, our research and application aim to use computers to aid human composition and to enrich the music education of musically untrained people.
Keywords: music technology, computer-aided music composing, machine learning (ML), Hidden Markov Model (HMM), Recurrent Neural Network (RNN), LSTM, Convolutional Neural Network, human-computer interaction
I am a senior at Horace Mann School. I have been actively involved with Concerts in Motion since middle school, spending Sunday afternoons singing with seniors in nursing homes. I have also participated in seasonal events at the Turtle Bay Music School, where we raised a music education fund for children from disadvantaged families. The friendships I developed during these events helped me understand just how much music can mean to someone. Rather than just listening to someone sing once a week, everyone should be able to create their own music. This gave me the idea of combining my love for singing with recent technological advancements to help others compose their own pieces. I developed this research under the guidance of Professor David Gu.
We are among the first to apply a hybrid of an HMM and a convolutional neural network with LSTM to music composition. The hybrid approach is further extended to let users specify the magnitude of revision, the duration of the segment to be revised, the music genre, the popularity of reference songs, and the co-creation of songs in social settings. The mobile user interface we designed is intuitive, interactive, and flexible, and suitable for the elderly and young children.
©2019, O'Reilly Media, Inc. All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.