I find it essential to restate my musical goal periodically, so as to clarify just what that goal is and what steps I might follow to attain it.
My goal is to make good music, music that I want to hear and that other people will want to hear. And I want to do this by means of algorithmic composition and algorithmic synthesis, because not only is that what first attracted me to composing, but I have also repeatedly found that while I can compose in other ways, the results are not impressive. It seems that only by programming can I make music that I actually want to hear.
My work is fundamental: it gets down into the basement questions of what music is, what composing is, and what the foundations of music theory are. The following sums up my research program. The items are listed in decreasing order of logical priority, but they are roughly equally important from a practical point of view.
- I need algorithms that can generate any possible score. I have proved that this is possible and I have software that can do it, but it’s not that useful because most of the scores are not good, so…
- Those algorithms need to be parametrically mappable; that is, they must be controllable by purely numerical parameters. Having a map means one can compose by exploring the map. This is much faster, and also more illuminating, than writing a whole series of similar programs.
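A minimal sketch of what "composing by exploring the map" means. This is not CsoundAC code; the generator and its single parameter (a logistic map driving a pitch contour) are illustrative assumptions, chosen only to show a score that is a pure function of numbers:

```python
# Illustrative sketch: a score generator controlled entirely by numerical
# parameters, so that composing becomes exploring the parameter space.
# The logistic-map generator here is an assumption for illustration only.

def generate_score(c, note_count=16, base_pitch=48, pitch_range=24):
    """Map a single numerical parameter c to a monophonic score.

    Each note is (onset time, duration, MIDI key). Nearby values of c
    yield related scores, so the parameter space can be explored as a map.
    """
    x = 0.5
    score = []
    for i in range(note_count):
        x = c * x * (1.0 - x)          # logistic map drives the pitch contour
        key = base_pitch + round(x * pitch_range)
        score.append((i * 0.5, 0.5, key))
    return score

# Exploring the map: small changes in c move smoothly through score space
# in some regions of the map, and chaotically in others.
a = generate_score(3.2)
b = generate_score(3.21)
```

The point is not this particular generator, but that every compositional decision has been pushed into a numeric coordinate that can be swept, interpolated, and mapped.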
- The algorithms must operate on a basis that is musical, so that, once the parameter space is mapped, the things a theory teacher would mark down (which are supposed to be, and often are, the same things a listener would have trouble hearing) can be avoided by sticking to certain regions of the map. This is what will make algorithmic composition actually usable for a wider variety of music: being able to explore compact and/or connected regions where the listening is not torture.
- It must be easy to generate listenable, glitch-free music that composers without the algorithms would find hard to imagine. (Otherwise, why bother?) But I am confident that if I can achieve the first three points, this one follows automatically: every method of algorithmic composition I have tried so far has thrown off a fascinating piece here and there; it is just that those pieces usually contain a musical "clunker" or two, or there are too many uninteresting pieces in the flow.
Currently, I do not have a toolkit that fulfills all these requirements. But I have managed to fulfill some of them in full and some in part; for the others, I have ideas about how to fulfill them.
The biggest missing piece right now is a space for representing music in which compositional operations are matrix operators (I have implemented such operators in CsoundAC's Voicelead, VoiceleadingNode, and Score classes), and in which these operators are easy to create and control even when the number of musical voices in the space changes (I have ideas, but no working model). The best I can come up with now is a space containing dozens or hundreds of potential voices, in which ones and zeros on the diagonals of the operators turn those voices on and off. I have not yet done the experiments that will tell me whether this is good enough. Right now it feels clunky, and the matrices seem like they will be too big.
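A minimal sketch of the "big space" idea, assuming NumPy rather than CsoundAC: a fixed vector of many potential voices, with a diagonal selection operator whose ones and zeros turn voices on and off. The dimensions and the particular operators are illustrative assumptions, not my working model:

```python
import numpy as np

# Illustrative sketch: a fixed space of N potential voices (here N = 8;
# in practice it could be hundreds), represented as a pitch vector with
# unused voices held at zero. These operators are assumptions for
# illustration, not CsoundAC code.

voices = np.array([60.0, 64.0, 67.0, 72.0, 0.0, 0.0, 0.0, 0.0])  # C major, 4 silent voices

# Selection operator: a one on the diagonal keeps a voice, a zero silences it.
keep = np.diag([1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])

# Transposition acting only on the selected voices: apply the selection,
# then add the transposition amount through the same diagonal mask, so
# silent voices stay at zero.
t = 2.0
transposed = keep @ voices + t * np.diag(keep)
# transposed is now [62, 66, 0, 74, 0, 0, 0, 0]
```

The clunkiness I mention is visible even here: an 8-voice space needs an 8x8 operator for what is conceptually a 3-voice operation, and with hundreds of potential voices the operators grow quadratically.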
The other big missing piece is the linear treatment of time, i.e. ensuring that chord progressions always follow each other in time and never stack up on top of each other. I am fairly confident that fractal interpolation, based on iterated function systems of operators in voice space, can do the job, but I still need to do the experiments here as well.
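A minimal sketch of why an iterated function system can guarantee the linear treatment of time, assuming simple affine maps on (time, pitch) points rather than operators in voice space: each map of the system sends the whole time span into its own disjoint subinterval, so every generated event has its own onset and events can never stack up. The specific maps are illustrative assumptions:

```python
# Illustrative sketch of fractal interpolation via an iterated function
# system: two affine maps, each owning half of the time axis, are applied
# recursively. Because the maps contract time into disjoint subintervals,
# the generated onsets are always strictly ordered. The maps here are
# assumptions for illustration only.

def ifs_events(depth, t0=0.0, t1=8.0, p0=60.0, p1=72.0):
    """Recursively subdivide the (time, pitch) segment with two maps."""
    if depth == 0:
        return [(t0, p0)]
    tm = t0 + 0.5 * (t1 - t0)      # left map owns [t0, tm), right map [tm, t1)
    pm = 0.5 * (p0 + p1) + 3.0     # pitch offset creates the fractal contour
    return (ifs_events(depth - 1, t0, tm, p0, pm) +
            ifs_events(depth - 1, tm, t1, pm, p1))

events = ifs_events(4)             # 2**4 = 16 events
times = [t for t, _ in events]
assert times == sorted(times)      # onsets strictly follow one another
```

Replacing the scalar pitch transformation with matrix operators in voice space would, in principle, give fractal interpolation of whole chord progressions with the same guarantee on time; that is the experiment still to be done.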