Parametric Composition

Parametric composition is the art of composing music by means of an algorithm whose output can be more or less continuously varied by adjusting a few numerical parameters. If changes in the pieces depend continuously upon changes in the parameters, then related pieces must lie next to each other in the parameter space. Such a parameter space is intelligible. It can be explored interactively, for example by zooming in and out, to discover certain classes of pieces, and more interesting pieces in each class. The parameter space becomes a sort of Mandelbrot set, and each parameter point in the space represents a piece, which is a sort of Julia set.
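To make this concrete, here is a minimal sketch of a parametric generator. It is purely hypothetical, a stand-in for any such algorithm: the point (x, y) in the unit square controls transposition, tempo, and dynamics, and because the mapping uses only continuous arithmetic on x and y, nearby parameter points yield audibly related pieces.

    def compose(x, y, length=16):
        """Map a parameter point (x, y) in the unit square to a short score.

        Each note is (onset_seconds, key, velocity). Because the mapping
        uses only continuous arithmetic on x and y, small changes in the
        parameters produce small changes in the resulting piece.
        """
        notes = []
        seconds_per_note = 0.125 + 0.375 * y          # y stretches the tempo
        for i in range(length):
            key = 36.0 + 48.0 * x + 7.0 * (i % 4)     # x transposes the pattern
            velocity = 64.0 + 48.0 * y * (i / length) # y shapes the dynamics
            notes.append((i * seconds_per_note, key, velocity))
        return notes

    # Nearby parameter points give closely related pieces:
    print(compose(0.50, 0.50)[:2])
    print(compose(0.51, 0.50)[:2])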

So, one would like there to be an algorithm that can generate any possible piece of music, and one would like to be able to compose by interactively exploring this space. I have informally proved that such an algorithm and such a parameter space exist. The proof goes like this. Any piece of music can be represented by a set of coefficients for the duplex Gabor transform over a subset of time and frequency. These coefficients form a finite 2-dimensional array of real numbers. Any such array can be approximated as closely as desired by computing the measure on a complex-valued iterated function system (IFS). From Michael Barnsley’s Collage Theorem, it follows that the measure changes continuously with changes in the coefficients of the IFS. Finally, although the IFS parameters can vary in number, any subset of IFS parameters can be mapped onto the unit square by identifying the upper corner of the square with the largest IFS in the subset, and the lower corner with the smallest IFS in the subset. Linear interpolation then suffices to identify each point in the unit square with a unique set of IFS coefficients, and each set of IFS coefficients in the subset can be interpolated from some point in the unit square. Thus, the unit square becomes a parameter map for generating any piece of music in the specified subset of time and frequency and the specified subset of IFS parameters. By making the subset of IFS parameters sufficiently large (perhaps a few hundred or a few thousand coefficients), the procedure becomes musically fruitful. Probably most pieces of music of moderate duration can be approximated by at most a few tens of thousands of coefficients. After all, Barnsley showed that most digital images could be approximated by at most a few thousand IFS coefficients, and a digitized piece of music contains no more than about ten times as much data as a digitized image.
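The following sketch shows, under simplifying assumptions, the two steps that do the work: interpolating IFS coefficients from a parameter value, and rendering the measure of the interpolated IFS onto a 2-dimensional array that stands in for the Gabor coefficients. The two corner coefficient sets are hypothetical, a single parameter t stands in for the position along the square's diagonal, and the chaos game is used as one standard way of approximating the invariant measure; it is an illustration of the idea, not the implementation described above.

    import numpy as np

    # Each map is z -> a*z + b on the complex plane, with a selection weight.
    ifs_small = [(0.50 + 0.00j, 0.00 + 0.00j, 0.5),
                 (0.50 + 0.00j, 0.50 + 0.00j, 0.5)]
    ifs_large = [(0.40 + 0.30j, 0.10 + 0.00j, 0.4),
                 (0.40 - 0.30j, 0.50 + 0.40j, 0.6)]

    def interpolate(t, corner0=ifs_small, corner1=ifs_large):
        """Linearly interpolate every coefficient between the two corner IFSs."""
        return [tuple((1.0 - t) * c0 + t * c1 for c0, c1 in zip(m0, m1))
                for m0, m1 in zip(corner0, corner1)]

    def render_measure(ifs, size=256, iterations=200000, seed=1):
        """Chaos game: accumulate visit counts as an approximation of the measure."""
        rng = np.random.default_rng(seed)
        weights = np.array([w for _, _, w in ifs], dtype=float)
        weights /= weights.sum()
        measure = np.zeros((size, size))
        z = 0.0 + 0.0j
        for _ in range(iterations):
            a, b, _ = ifs[rng.choice(len(ifs), p=weights)]
            z = a * z + b
            col = int(np.clip(z.real, 0.0, 0.999) * size)   # stands in for time
            row = int(np.clip(z.imag, 0.0, 0.999) * size)   # stands in for frequency
            measure[row, col] += 1.0
        return measure / measure.max()

    # One parameter point yields one approximate coefficient array.
    coefficients = render_measure(interpolate(0.25))

A real implementation would use many more maps and would drive an inverse Gabor transform with the resulting array; the sketch only shows how a parameter point determines a coefficient array.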

What I have not done is to create a musically useful implementation of this algorithm. It is too compute-intensive. On current personal computers, computing a single IFS and rendering the generated duplex Gabor transform takes at least several minutes (I have performed the experiment). To be musically useful, the algorithm would have to be able to generate thousands of these points in less than a minute, and even then it would take hours of exploration to discover interesting pieces. It might be possible to send each parameter point out to a different computer on the World Wide Web, using a system like that of SETI@home, so this brute-force approach should still be investigated. After all, the algorithm lends itself to computing each IFS, or indeed each coefficient in any one Gabor transform, in parallel.
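On a single machine, that parallelism at least lets the search use every processor core. The sketch below uses Python's multiprocessing as a local stand-in for farming parameter points out to many computers; evaluate_point is a hypothetical placeholder for the expensive step of computing one IFS and rendering its Gabor transform.

    from multiprocessing import Pool

    def evaluate_point(t):
        """Placeholder for the expensive step: compute the IFS at parameter t,
        render its duplex Gabor transform, and return some summary of it."""
        total = 0.0
        for k in range(1, 1000):
            total += (t * k) % 1.0
        return t, total / 999.0

    if __name__ == "__main__":
        points = [i / 999.0 for i in range(1000)]
        with Pool() as pool:          # one worker process per CPU core by default
            results = pool.map(evaluate_point, points)
        print(results[:3])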

However, on a single computer, computing so many sounds in this way is much too slow and takes much too much memory. Of course, instead of using Gabor transforms to represent the complete sound of each piece, it might also work to represent only the score for each piece. Scores contain millions of times less data than sounds. But then the representation of the scores becomes an issue; there are many, many ways of representing scores. Ideally, one would like to be able to specify an orchestra of N voices over some subset of time, frequency, and loudness, and then be able to generate any subset of points in this set. Even better, one would like the representation to be close to how composers hear and imagine music. We know that the Gabor transform (sonograms, more or less) or, even better, various wavelet transforms are indeed close to at least some basic aspects of how composers hear and imagine sounds. What concise, compressed representation of scores is similarly close to at least some basic aspects of how composers hear and imagine scores?
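For illustration only, here is a minimal sketch of one such representation, one of the many, many ways and not a recommendation: a coefficient array like the one rendered above is thresholded, and each sufficiently strong cell becomes a note, with the column mapped to onset time, the row to pitch, and the cell's value to loudness.

    import numpy as np

    def array_to_score(measure, duration=60.0, low_key=24, high_key=108,
                       threshold=0.2):
        """Return a list of (onset_seconds, midi_key, midi_velocity) notes."""
        rows, cols = measure.shape
        notes = []
        for row in range(rows):
            for col in range(cols):
                value = float(measure[row, col])
                if value < threshold:
                    continue
                onset = duration * col / cols
                key = low_key + (high_key - low_key) * row / (rows - 1)
                velocity = 1.0 + 126.0 * value
                notes.append((onset, round(key), round(velocity)))
        return sorted(notes)

    # A random array stands in for a rendered coefficient array here.
    score = array_to_score(np.random.default_rng(0).random((32, 64)) ** 4)
    print(len(score), score[:3])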
