We describe the concepts behind a web-based, minimal-UI DJ system that adapts to the user's preferences via simple interactive decisions and feedback on taste. Starting from a preset decision tree modeled on common DJ practice, the system gradually learns a more customised, user-specific tree. At the core of the system are structural representations of the musical content, based on semantic audio technologies and inferred from features extracted from the audio directly in the browser. These representations are gradually combined into a representation of the mix, which can then be saved and shared with other users. We show how different types of transitions can be modeled using simple musical constraints. Potential applications of the system include crowd-sourced data collection, both for temporally aligned playlisting and for musical preference.
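To make the idea of modeling transitions with simple musical constraints concrete, the following sketch checks whether two tracks are mixable under two common DJ heuristics: tempo proximity and harmonic compatibility on the Camelot wheel. All names, thresholds, and the choice of constraints are illustrative assumptions, not taken from the paper itself.

```javascript
// Hypothetical sketch: a transition-compatibility check under
// simple musical constraints. Constraint choices (Camelot-wheel
// keys, ~6% tempo tolerance) are illustrative assumptions.

// Two Camelot keys are compatible if they share a number
// (relative major/minor, e.g. 8A <-> 8B) or are adjacent on
// the same ring (e.g. 8A <-> 9A), wrapping around at 12.
function keysCompatible(a, b) {
  const parse = (k) => ({ num: parseInt(k, 10), ring: k.slice(-1) });
  const ka = parse(a), kb = parse(b);
  if (ka.num === kb.num) return true;
  const dist = Math.min(
    Math.abs(ka.num - kb.num),
    12 - Math.abs(ka.num - kb.num) // wrap around the wheel
  );
  return ka.ring === kb.ring && dist === 1;
}

// Tempos are mixable if they differ by at most ~6%, a typical
// pitch-fader range on DJ hardware.
function temposCompatible(bpmA, bpmB, tolerance = 0.06) {
  return Math.abs(bpmA - bpmB) / Math.min(bpmA, bpmB) <= tolerance;
}

// A transition is allowed when both constraints hold.
function canTransition(trackA, trackB) {
  return temposCompatible(trackA.bpm, trackB.bpm) &&
         keysCompatible(trackA.key, trackB.key);
}

const a = { bpm: 126, key: "8A" };
const b = { bpm: 128, key: "9A" };
console.log(canTransition(a, b)); // true: tempos within 6%, adjacent keys
```

A decision tree like the one described in the abstract could then branch on which of these constraints hold, choosing, say, a long blend when both are satisfied and a cut or echo-out transition otherwise.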