Inspired by neuroscience, we can start to answer the question: What does music look like?
I know that I am not the only one who, when listening to music, likes to imagine shapes and colors, as well as growing and ever-evolving visual patterns. Until recently, I did not know that music visualization¹ ² was a common theme in creative coding projects. As an attempt to reconcile my neuroscientific and academic background with my current data processing environment, I created an audio and music visualizer that uses the information within audio streams to grow a neural forest, all in real time. Here is how it looks after a fragment of “Clair de lune” by Debussy:
NOTE: I am not going to describe detailed mathematics here. If you are interested in knowing more about the algorithm and how it works, feel free to contact me! The source code can be found at the end of the text.
The sound_Neuron forest in detail
Looking at an original drawing by 1906 Nobel Prize winner Santiago Ramón y Cajal, we can see that a pyramidal neuron looks like this:
The soma in the center (the body of the neuron) has an axon (the wider branch) and a dendritic tree (the rest of the smaller branches)³. Through a process called neurogenesis⁴, these specialized cells grow from simple immature somas into intricate mature neurons⁵ like the one we see here, with branches spreading randomly out of the central soma. If we want to simulate this growth, a good way to do so is by means of random walkers⁶, where, in very general terms and without going into mathematical detail, the position p of every growing branch b at step t+1 is chosen at random. True random walks generate very tangled dendritic webs (there is no directionality gradient), so it is wiser to take a probabilistic approach, where a branch only changes direction if a random draw passes a given probability threshold. Very importantly, we can see from Cajal’s original drawing that the farther a dendrite or axon gets from the central soma, the thinner it becomes and the more intricate the branching patterns that emerge. We can replicate this with another probabilistic rule: once the width of an axon or dendrite falls below a certain threshold, we allow it to spawn a new branch (a new walker, in mathematical terms). If you are interested in the mathematics of random walks, which are applied in the code, there are many sources that go into more detail⁶ ⁷.
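To make this concrete, here is a minimal sketch of the idea in Processing (my own simplified illustration, not the project’s actual code; the Walker class and parameters like turnProbability and branchWidth are names I made up):

// A branching random walker: direction only changes when a random
// draw passes turnProbability, and once a branch gets thin enough
// it may spawn a child walker, mimicking dendritic branching.
ArrayList<Walker> walkers = new ArrayList<Walker>();

void setup() {
  size(600, 600);
  background(255);
  stroke(0, 40);  // semi-transparent strokes, like faded ink
  walkers.add(new Walker(width / 2, height / 2, random(TWO_PI), 8));
}

void draw() {
  for (int i = walkers.size() - 1; i >= 0; i--) {
    Walker wk = walkers.get(i);
    wk.step();
    if (wk.w < 0.3) walkers.remove(i);  // branch too thin: stop growing
  }
}

class Walker {
  PVector pos;
  float heading, w;                // direction and current width
  float turnProbability = 0.2;     // chance of changing direction
  float branchWidth = 4;           // may spawn a child below this width

  Walker(float x, float y, float heading, float w) {
    pos = new PVector(x, y);
    this.heading = heading;
    this.w = w;
  }

  void step() {
    // direction changes only if a random draw passes the threshold
    if (random(1) < turnProbability) heading += random(-0.6, 0.6);
    PVector next = PVector.add(pos, PVector.fromAngle(heading).mult(3));
    strokeWeight(w);
    line(pos.x, pos.y, next.x, next.y);
    pos = next;
    w *= 0.99;  // thinner the farther it grows from the soma
    // once thin enough, occasionally spawn a new branch
    if (w < branchWidth && random(1) < 0.02) {
      walkers.add(new Walker(pos.x, pos.y, heading + random(-1, 1), w));
    }
  }
}

The two random(1) checks are the probabilistic thresholds described above: one gates direction changes, the other gates branching.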
To achieve all this, I used the Java-based programming language Processing. Processing has some very powerful tools and functions that allow it to integrate graphical and audio streams into one single pipeline, making the job easier.
Luckily for me, the Processing community is very active, and through openprocessing.org I found a sketch simulating a growing tree that suited my needs. The original random branching algorithm, which I adapted and extended with sound input (and more) to create this project, is taken from here.
After some coding (a link to the code is at the end of the text), I managed to create simulations that imitate neuronal branching. Compare a neuron drawn with the code to what Cajal drew:
Similar, don’t you think? Interestingly, since only the initial parameters (like size, reach, and transparency) are fixed while the branching itself follows probabilistic random growth, each time you run the drawing you get a completely different neuron:
With this, we have the base of the drawing figured out; now let’s bring these neurons to life with sound and color!
Remember when I said that the drawing parameters are fixed? What about changing these parameters based on an input? Even further, what about making this input audio, so that the growing pattern and velocity change depending on the sound? This is exactly what I did next.
The easiest approach with sound would be to work with a real-time amplitude (or intensity) stream of the signal as a growing parameter. In very simple pseudo-code this would look like this:
sound_neurons(neuron_width, neuron_reach, amplitude_stream) {
    drawing_commands;
}

where amplitude_stream is the only parameter that is updated via real-time audio streaming (see the source code for more information). First, let’s change the diameter and transparency of the soma and dendritic branches based on this stream at the moment of running the simulation: the louder the input, the larger the neuron. Also, let’s track the drawing so we can see how the neuron branches in real time.
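In real Processing code, that amplitude stream can come from the Sound library’s Amplitude analyzer attached to the microphone. A minimal sketch of the wiring (the drawing inside draw() is only a placeholder for the walker logic above):

import processing.sound.*;

AudioIn mic;
Amplitude amp;

void setup() {
  size(800, 600);
  background(0);
  mic = new AudioIn(this, 0);  // default microphone input
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);              // analyze the live stream
}

void draw() {
  float level = amp.analyze();  // current amplitude, 0.0 .. 1.0
  // louder input -> wider, more opaque soma and branches
  float neuronWidth = map(level, 0, 1, 1, 12);
  float alpha = map(level, 0, 1, 20, 255);
  stroke(255, alpha);
  strokeWeight(neuronWidth);
  point(width / 2 + random(-level * 200, level * 200),
        height / 2 + random(-level * 200, level * 200));
}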
Now things start to look very interesting! With this tweak, we are beginning to have sound-reactive neurons. With a quiet input (which was me literally whispering into my microphone) I got a small neuron. After a very loud input, the next drawing is much bigger! This is why I decided to call these drawings sound_Neurons.
Another thing we can try is to manipulate the drawing speed, again with the streamed amplitude. Look again at the cover image of this story and you might see that the growing speed of each sound_Neuron follows a bum bum… bum bum… bum bum… rhythm.
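One way to get this pulsing behavior is to let the amplitude decide how many growth steps each walker takes per frame (again a sketch under my earlier naming, not the project’s exact code):

// inside draw(): louder passages make every branch sprint,
// quiet ones make them crawl
float level = amp.analyze();               // 0.0 .. 1.0
int stepsThisFrame = 1 + int(level * 10);  // amplitude -> growth speed
for (int i = walkers.size() - 1; i >= 0; i--) {
  for (int s = 0; s < stepsThisFrame; s++) {
    walkers.get(i).step();
  }
}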
I then decided to randomly change the colors of the neurons (in HSV space⁸ in the video example) each time they appear, so the final picture is more colorful.
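In Processing, the HSV space is exposed as colorMode(HSB). One possible way to do this, assuming each Walker also stores a stroke color c that step() applies before drawing:

void setup() {
  size(800, 600);
  background(0);
  // HSB is Processing's name for the HSV space referenced above
  colorMode(HSB, 360, 100, 100, 100);  // hue 0-360; sat, bri, alpha 0-100
}

// give every new neuron a fresh random hue when it is seeded
void seedNeuron(float x, float y) {
  Walker root = new Walker(x, y, random(TWO_PI), 8);
  root.c = color(random(360), 80, 100, 60);  // translucent random color
  walkers.add(root);
}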
Finally, since Processing is able to stream and read audio files, the final and logical step for me was to integrate these colored sound_Neurons with music! For this, I decided to use “Clair de lune,” the beautiful piano piece by Claude Debussy, as the driver for a neural forest. By driver I mean that the intensity and changes in the music’s rhythm dictate how this forest grows.
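Swapping the microphone for a music file is a small change with the same Sound library (the filename below is just a placeholder for whatever audio sits in the sketch’s data folder):

import processing.sound.*;

SoundFile song;
Amplitude amp;

void setup() {
  size(1280, 720);
  background(0);  // dark background so the neurons fill the space
  song = new SoundFile(this, "clair_de_lune.mp3");
  song.play();
  amp = new Amplitude(this);
  amp.input(song);  // analyze the file as it plays
}

void draw() {
  float level = amp.analyze();  // same 0..1 stream as with the microphone
  // ...feed level into the sound_Neuron growth exactly as before
}

The final result is this image: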
It was generated in real time in the video at the beginning of the text, using a dark background so the neurons really fill the space with sound. Even after many runs, I still find it mesmerizing to watch.

This side project of mine turned out to be much more than what I anticipated at the beginning. Listening to music through these neural forests helped me bring to life the shapes and colors I talked about imagining, in a rich and rather personal way. I like to think of this as my take on mixing a little bit of science with art and signal processing. As such, this project is a reflection of who I am: I did my graduate studies in neural networks, and as someone who loves music and visual arts, this was a means of expressing my passion for both sides.
I hope you enjoyed reading and watching these animations. I certainly hope to continue exploring this beautiful field of music visualization in the future and, with luck, again with sound_Neurons.
NOTE: What song would you like to see colored by neurons? Let us know in the comments and we will randomly select one or two and generate a new sound_Neuron forest.
Thank you for reading!
References:
[1] https://en.wikipedia.org/wiki/Music_visualization
[2] https://medium.com/nightingale/data-visualization-in-music-11fcd702c893
[3] https://qbi.uq.edu.au/brain/brain-anatomy/what-neuron
[4] https://qbi.uq.edu.au/brain-basics/brain-physiology/what-neurogenesis
[5] Kempermann, G., & Overall, R. W. (2018). The small world of adult hippocampal neurogenesis. Frontiers in Neuroscience, 12, 641.
[6] https://www.mit.edu/~kardar/teaching/projects/chemotaxis(AndreaSchmidt)/random.htm
[7] https://medium.com/@ensembledme/random-walks-with-python-8420981bc4bc
[8] https://en.wikipedia.org/wiki/HSL_and_HSV