[Submitted on 26 Jan 2023]
Authors: Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Sharifi, Neil Zeghidour, Christian Frank
Abstract: We introduce MusicLM, a model that generates high-fidelity music from text
descriptions such as “a calming violin melody backed by a distorted guitar
riff”. MusicLM casts the process of conditional music generation as a
hierarchical sequence-to-sequence modeling task, and it generates music at 24
kHz that remains consistent over several minutes. Our experiments show that
MusicLM outperforms previous systems both in audio quality and adherence to the
text description. Moreover, we demonstrate that MusicLM can be conditioned on
both text and melody: it can transform whistled and hummed melodies to match
the style described in a text caption. To support future research,
we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs,
with rich text descriptions provided by human experts.
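The "hierarchical sequence-to-sequence" framing in the abstract can be pictured as a cascade of autoregressive stages, each producing a token sequence conditioned on the output of the stage before it, with the final token sequence decoded to audio by a neural codec. The sketch below is a minimal illustration of that cascade, not the authors' implementation: the Stage module, the token vocabularies, the conditioning embeddings, and the greedy decoding loop are all hypothetical placeholders chosen for brevity.

```python
# A minimal sketch of hierarchical sequence-to-sequence token generation,
# assuming two placeholder stages ("semantic" then "acoustic"). All names,
# vocabulary sizes, and hyperparameters here are illustrative, not MusicLM's.
import torch
import torch.nn as nn

class Stage(nn.Module):
    """Autoregressive Transformer decoder mapping a conditioning sequence to tokens."""
    def __init__(self, vocab_size, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, conditioning):
        x = self.embed(tokens)
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.decoder(x, conditioning, tgt_mask=mask)
        return self.head(h)

@torch.no_grad()
def generate(stage, conditioning, bos, steps):
    """Greedy autoregressive decoding (sampling omitted for brevity)."""
    tokens = torch.full((conditioning.size(0), 1), bos, dtype=torch.long)
    for _ in range(steps):
        logits = stage(tokens, conditioning)[:, -1]
        next_tok = logits.argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]  # drop the BOS token

d_model = 256
semantic_stage = Stage(vocab_size=1024, d_model=d_model).eval()
acoustic_stage = Stage(vocab_size=4096, d_model=d_model).eval()
sem_to_cond = nn.Embedding(1024, d_model)  # lifts semantic tokens to conditioning

text_cond = torch.randn(1, 16, d_model)  # stand-in for a text embedding sequence
semantic = generate(semantic_stage, text_cond, bos=0, steps=50)
acoustic = generate(acoustic_stage, sem_to_cond(semantic), bos=0, steps=200)
print(acoustic.shape)  # (1, 200): fine-grained tokens for a codec to decode
```

In a full system of this shape, the final acoustic token sequence would be decoded to a 24 kHz waveform by a neural audio codec; that step, along with training and the actual conditioning pipeline, is omitted here.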
Submission history
From: Timo Denk
[v1] Thu, 26 Jan 2023 18:58:53 UTC (503 KB)