This project uses an implementation of the Neural Style paper, together with Python signal-processing libraries and Pixi.js, to create a music visualizer. When a song's bass tones dominate, the background shows an ambient image rendered in one style; when the treble tones dominate, the same image is rendered in a different style. "In-between" tones present a blend of the two styles. Additionally, particles appear for strong bass (blue) and treble (red) sounds. Each particle is initialized with a random speed and direction, but it then follows a vector field formed by the gradient of the background image.
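The mechanics can be sketched roughly as follows. This is a minimal illustration rather than the project's actual code: the `bassEnergy`/`trebleEnergy` values and the `gradX`/`gradY` gradient arrays are assumed to come from the precomputed data described below.

```js
// Crossfade weight between the two styled backgrounds:
// 0 => pure bass style, 1 => pure treble style.
function styleBlend(bassEnergy, trebleEnergy) {
  const total = bassEnergy + trebleEnergy;
  return total > 0 ? trebleEnergy / total : 0.5;
}

// Minimal sketch of the particle update. gradX/gradY are assumed to be
// per-pixel gradients of the background image, laid out row-major on a
// width x height grid.
function updateParticle(p, gradX, gradY, width, height, dt) {
  const x = Math.floor(p.x), y = Math.floor(p.y);
  if (x < 0 || x >= width || y < 0 || y >= height) {
    p.alive = false; // drop particles that leave the field
    return;
  }
  const idx = y * width + x;
  // Steer the particle's random initial velocity toward the field direction.
  p.vx += gradX[idx] * dt;
  p.vy += gradY[idx] * dt;
  p.x += p.vx * dt;
  p.y += p.vy * dt;
}
```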
This is not a real-time system: to add more backgrounds or music, you must first generate the corresponding images and data files with the Python scripts.
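For illustration, this is one way the client might consume such a precomputed data file; the filename pattern and JSON layout here are assumptions, not the project's actual format.

```js
// Fetch a precomputed per-frame energy file for a song (hypothetical layout).
async function loadEnergies(song) {
  const res = await fetch(`data/${song}.json`);
  return res.json(); // e.g. [{t: 0.0, bass: 0.8, treble: 0.1}, ...]
}

// During playback, binary-search for the frame nearest the audio's current time.
function frameAt(frames, currentTime) {
  let lo = 0, hi = frames.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (frames[mid].t < currentTime) lo = mid + 1; else hi = mid;
  }
  return frames[lo];
}
```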
If you create new content, register it in the global variables at the top of musicvis.js so that it can be selected on the page (see the sketch after this paragraph). To run the system, serve this directory with a PHP-capable server; XAMPP is a good option. If PHP is not available, a separate JS file (readclient.js) can be used in place of webservice.php. The sliders work while a song is playing, but to choose a new song or background, wait until the current song ends or refresh the page. If the renderer blacks out after multiple plays, or the audio and visuals fall out of sync, refresh the page: several small bugs keep the system from being fully robust, and this prototype is only meant to demonstrate the general idea of the project.
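As a hypothetical example of what those globals might look like (the names and fields below are illustrative only; match whatever musicvis.js actually declares):

```js
// Illustrative shape of the globals at the top of musicvis.js.
// All identifiers and paths here are assumptions for the sake of example.
var SONGS = [
  { name: "Example Song", audio: "music/example.mp3", data: "data/example.json" },
];
var BACKGROUNDS = [
  { name: "Forest", bass: "img/forest_styleA.png", treble: "img/forest_styleB.png" },
];
```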
I do not plan to develop this project further, but if I did, I would consider the following: