Generating music with resting-state fMRI data

This paper is available in a repository.

Abstract

Resting-state fMRI (rsfMRI) data generate time courses with unpredictable hills and valleys. People with musical training may notice that, to some degree, these resemble the notes of a musical scale. Taking advantage of these similarities, and using only rsfMRI data as input, we apply basic rules of music theory to transform the data into musical form. Our project is implemented in Python using the midiutil library. We used open rsfMRI data from the ABIDE dataset, preprocessed by the Preprocessed Connectomes Project. We randomly chose 10 individual datasets preprocessed with the C-PAC pipeline under 4 different strategies. To reduce the dimensionality of the data, we used the CC200 atlas to downsample voxels to 200 regions of interest. A framework for generating music from fMRI data, based on music theory, was developed and implemented as a Python tool yielding several audio files. When listening to the results, we noticed that the music differed across individual datasets, whereas music generated from the same individual's data (under the 4 preprocessing strategies) remained similar. Our results sound different from the music obtained in a similar study using EEG and fMRI data.
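
For readers who want a concrete picture of this kind of mapping, the sketch below shows one plausible way to turn a single ROI time course into notes with the midiutil library named above. It is a minimal illustration, not the authors' implementation: the pentatonic scale, the min-max normalization, the one-note-per-time-point rhythm, and the time_course_to_midi helper are all assumptions, since the abstract does not spell out the exact music-theory rules.

```python
import numpy as np
from midiutil import MIDIFile

def time_course_to_midi(signal, out_path="roi_music.mid", tempo=120):
    """Map a 1D time course onto a pentatonic scale and write it as MIDI."""
    # C-major pentatonic over two octaves (MIDI pitch numbers); an
    # illustrative choice, not necessarily the scale used in the paper.
    scale = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]

    # Min-max normalize to [0, 1], then quantize each sample to a scale degree.
    s = np.asarray(signal, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    degrees = np.round(s * (len(scale) - 1)).astype(int)

    midi = MIDIFile(1)  # a single track
    midi.addTempo(track=0, time=0, tempo=tempo)
    for beat, degree in enumerate(degrees):
        # One quarter note per time point: the pitch follows the
        # signal's hills and valleys.
        midi.addNote(track=0, channel=0, pitch=scale[degree],
                     time=beat, duration=1, volume=100)

    with open(out_path, "wb") as f:
        midi.writeFile(f)

# Example with a synthetic random-walk signal standing in for one ROI's
# BOLD time course (in practice this would come from the CC200 ROI data).
rng = np.random.default_rng(0)
time_course_to_midi(np.cumsum(rng.standard_normal(64)))
```

With per-individual ROI time courses as input, a loop over the 200 regions or over the 10 datasets would yield one MIDI file per region or per subject, which matches the abstract's description of the tool producing several audio files.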