Inspiration

I listen to quite a bit of classical music and movie OSTs and wanted to write my own. Unfortunately, I'm quite horrendous at comprehending sheet music, so I decided to try another route: what if we could generate music using a recurrent neural network? I then built one, using online resources and APIs as well as my own knowledge.

What it does

It uses an LSTM-based recurrent network to generate a sequence of notes and chords, which is then converted into a piano MIDI file. As of now the LSTM and the Unity rig are not connected, since bridging Python and Unity is somewhat difficult.
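
The conversion step is worth sketching. Below is a minimal illustration, not this project's exact code, of how a predicted sequence of note/chord tokens could be written out as a piano MIDI file with music21; it assumes notes are encoded as pitch names like "C4" and chords as dot-separated pitch classes like "0.4.7", a common convention in Keras music-generation tutorials.

```python
from music21 import chord, instrument, note, stream

def sequence_to_midi(predicted_tokens, out_path="output.mid"):
    """Write a list of note/chord tokens (assumed encoding) to a piano MIDI file."""
    offset = 0.0
    elements = []
    for token in predicted_tokens:
        if "." in token or token.isdigit():
            # Chord: dot-separated pitch classes, e.g. "0.4.7"
            new_element = chord.Chord([int(p) for p in token.split(".")])
        else:
            # Single note given as a pitch name, e.g. "C4"
            new_element = note.Note(token)
        new_element.offset = offset
        elements.append(new_element)
        offset += 0.5  # fixed half-beat step so notes don't stack on one offset
    midi_stream = stream.Stream(elements)
    midi_stream.insert(0, instrument.Piano())
    midi_stream.write("midi", fp=out_path)
```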

How I built it

I used the Keras API for the model architecture; most of the build process is covered in the model walkthrough at the end of the submission. The Unity game engine was used for the user interface.
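
The full architecture is in the walkthrough, but as a rough sketch, a Keras LSTM for this kind of next-note prediction typically looks like the following; the layer sizes, dropout rate, and optimizer here are illustrative assumptions rather than the exact values used in this project.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation

def build_model(sequence_length, n_vocab):
    """sequence_length: notes per input window; n_vocab: number of distinct note/chord tokens."""
    model = Sequential()
    # Stacked LSTMs read a window of previous notes...
    model.add(LSTM(256, input_shape=(sequence_length, 1), return_sequences=True))
    model.add(Dropout(0.3))
    model.add(LSTM(256))
    model.add(Dropout(0.3))
    # ...and a softmax over the note/chord vocabulary predicts the next token.
    model.add(Dense(n_vocab))
    model.add(Activation("softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
    return model
```

Generation would then work by repeatedly feeding the last `sequence_length` tokens back in and sampling from the softmax output.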

Challenges I ran into

Training the model was extremely painful: it took a full day of training to reach the (still somewhat discordant) level of output produced. The Unity integration was cut short because I had to go to school on the 10th, and linking the two in an HTML integration was, and still is, near impossible.

Accomplishments that I'm proud of

The fact that it actually works and produces music! The Unity visualiser also came out quite well for the short time I had, and the music itself is quite decent: it has flow and is not disjointed.

What I learned

Keras API, Music21 API, LSTM theory, MIDI file theory, Unity LineRenderer
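
To show how the Music21 API and MIDI file theory fit together on the data side, here is a small assumed sketch (not the project's actual preprocessing code) of flattening a MIDI file into the kind of note/chord token sequence an LSTM can train on.

```python
from music21 import chord, converter, note

def midi_to_tokens(path):
    """Parse a MIDI file and return a flat, time-ordered list of note/chord tokens."""
    score = converter.parse(path)
    tokens = []
    for element in score.flat.notes:  # notes and chords in offset order
        if isinstance(element, note.Note):
            tokens.append(str(element.pitch))  # e.g. "C4"
        elif isinstance(element, chord.Chord):
            # Encode chords as dot-joined pitch classes, e.g. "0.4.7"
            tokens.append(".".join(str(n) for n in element.normalOrder))
    return tokens
```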

What's next for superMusicAi

I want to train a dance-music LSTM that could work as a DJ for an EDM event. I also want to build a working prototype that links the UI to the prediction step, so it can generate classical or another style of music on the go. If possible, I would like to match the colours in the UI to the generated music in order to set a mood.

Built With

Keras, Music21, Python, Unity
