Music streaming services feature billions of playlists created by users, professional editors, or algorithms. Given this content overload, it is crucial to characterise playlists so that music can be effectively organised and accessed. Playlist titles and descriptions are proposed in natural language either manually by music editors and users or automatically from pre-defined templates. However, the former is time-consuming, while the latter is limited in its vocabulary and in the range of music themes it can cover.

In this work, we propose PLAYNTELL, a data-efficient multi-modal encoder-decoder model for automatic playlist captioning. Compared to existing music captioning algorithms, PLAYNTELL also leverages linguistic and musical knowledge to generate correct and thematic captions. We benchmark PLAYNTELL on a new dataset of editorial playlists collected from two major music streaming services. PLAYNTELL yields 2x-3x higher BLEU@4 and CIDEr scores than state-of-the-art captioning algorithms.
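For context, BLEU@4 is the standard corpus-level BLEU score computed over 1- to 4-gram precisions. A minimal sketch of how such a score could be computed for generated playlist captions is given below, using NLTK; this is an illustration only, not the authors' evaluation code, and the caption strings are hypothetical.

```python
# Illustrative sketch (not the authors' evaluation pipeline): corpus-level
# BLEU@4 for generated playlist captions, computed with NLTK.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One list of tokenised reference captions per playlist, and one generated
# caption per playlist (hypothetical example data).
references = [
    [["chill", "acoustic", "songs", "for", "rainy", "days"]],
    [["upbeat", "pop", "hits", "for", "your", "workout"]],
]
hypotheses = [
    ["chill", "acoustic", "tracks", "for", "rainy", "days"],
    ["upbeat", "pop", "songs", "for", "working", "out"],
]

# BLEU@4: uniform weights over 1- to 4-gram precisions; smoothing avoids
# zero scores when a higher-order n-gram has no match.
bleu4 = corpus_bleu(
    references,
    hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU@4: {bleu4:.3f}")
```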

This paper has been accepted for publication in the proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022).