Aislyn Rose

Interacting with Acoustics and Deep Learning

Published Aug 25, 2019

In light of the PyCon DE and PyData Berlin talk my colleague and I are giving this fall (October 2019), I began putting together Jupyter notebooks to make acoustics and deep learning more tangible. (Similar notebooks are also available at this PySoundTool repo and on Binder.)

To my delight, notebooks.ai offers an amazing platform for this using Jupyter Lab. I was able to load pre-trained models and sound datasets for feature extraction, and play manipulated sound files online, without anyone having to download or run anything on their own local machines. I strongly recommend setting up your own free account to fork and experiment with not only my notebooks but those of countless others.
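To give a flavor of the kind of feature extraction the notebooks deal with, here is a minimal sketch: it generates a sine tone (a stand-in for a real recording, since no dataset is assumed here) and computes a magnitude spectrogram with SciPy, the sort of representation commonly fed to deep learning models for sound. The sample rate, tone frequency, and window size are illustrative choices, not values from the notebooks.

```python
import numpy as np
from scipy.signal import stft

# A 440 Hz sine tone standing in for a real recording (illustrative only).
sr = 16000  # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: the basis of spectrogram features.
freqs, times, Z = stft(tone, fs=sr, nperseg=512)
spectrogram = np.abs(Z)  # shape: (freq bins, time frames)

# Sanity check: the strongest frequency bin should sit near 440 Hz.
peak_hz = freqs[spectrogram.mean(axis=1).argmax()]
print(spectrogram.shape, peak_hz)
```

In a notebook, the resulting array can be plotted with matplotlib and the audio played back with `IPython.display.Audio`, which is what makes this format so handy for exploring sound in the browser.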

Even if you don’t have an account, you can still read, see the visuals, and play the sound files from the tutorials and notebooks I’ve put together so far. Keep in mind they are not there to explain everything in the field of digital signal processing or deep learning; they are there to offer a place for exploring sound data (potentially your own recordings) and deep learning. As time goes on, I hope to add more and more content to enhance understanding, but don’t shy away from a little research of your own if anything is unclear. Note: I don’t tout myself as a person who explains things miraculously well… that doesn’t mean I will ever quit trying, though.