
The Absurd in Science and Religion

The absurd is something like this: there are spirits who drink cologne. Or, even though I am dead, I will be resurrected at the end of the world. Or, even though the wicked prosper, God is good, knows everything, and can do anything.

It seems clear that absurdity of this sort is an essential component of, or implication of, religion of any kind.

My question is this: is there any absurdity of this sort in science, that is, in the very foundations of the scientific world-view?

I’m not sure, but I see some profound indications that there is. Most notably, the Free Will Theorem of Conway and Kochen proves that if experimenters are free to set up any experiment, then the phenomena under observation have an irreducibly random component. This result is not consistent with the computability of Nature, because randomness is not computable.

The absurdity of science is then the dilemma that either we must formulate theories as if we are free to create any experiment when in fact we are not, or else we must formulate only computable theories even though in fact the cosmos is not computable.

Algorithmic Composition, Computational Irreducibility, and Parameter Maps

Algorithmic composition is the art of composing music by writing and running algorithms. These algorithms usually are implemented as computer programs, although there are notable examples of algorithmic scores computed entirely by hand, e.g. Leonid Hrabovsky’s Concerto Misterioso (many works of Cage could be considered to fall into this category as well).

An algorithm is computationally irreducible if, and only if, its output cannot be predicted in advance of actually running the algorithm, even by a close reading and analysis of its code. Only irreducible algorithms are of any intrinsic artistic interest, because reducible algorithms are essentially alternative forms of music notation — more convenient, perhaps, than neumes on ledger lines, but not fundamentally different. Thus, only irreducible algorithms enable the computer to bring anything really new to the capabilities of the human imagination. Irreducibility is a consequence of the undecidability of formal systems. It follows that irreducibility is a truly fundamental feature of logic and, indeed, because logic is embodied in computing devices, of Nature itself.
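
A standard illustration of computational irreducibility (not from the text above, but a classic example) is Wolfram’s Rule 30 cellular automaton: as far as is known, there is no shortcut for predicting its output other than actually running it. A minimal sketch:

    // Sketch: Rule 30, a classic example of a computationally irreducible process.
    // No shortcut is known for predicting its center column except running it.
    #include <iostream>
    #include <vector>

    int main() {
        const int width = 64;
        const int steps = 32;
        std::vector<int> cells(width, 0);
        cells[width / 2] = 1; // Single live cell in the middle.
        for (int step = 0; step < steps; ++step) {
            for (int cell : cells) {
                std::cout << (cell ? '#' : '.');
            }
            std::cout << '\n';
            std::vector<int> next(width, 0);
            for (int i = 0; i < width; ++i) {
                int left = cells[(i + width - 1) % width];
                int center = cells[i];
                int right = cells[(i + 1) % width];
                // Rule 30: new cell = left XOR (center OR right).
                next[i] = left ^ (center | right);
            }
            cells = next;
        }
        return 0;
    }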

But how is a composer to gain control over the flood of output produced by irreducible algorithms? How are we to penetrate their obscuring veil of noise?

One possible approach is parametric composition. This is the art of composing music by interactively exploring a parameter space, each point of which represents a different input to the same computationally irreducible score generator. If the points are colored by some musically relevant feature of the generated scores, examination and exploration of the parameter map is far more informative than merely picking points at random and generating the corresponding scores. Note that such a map cannot make any algorithm computationally reducible — but it does present the outputs of an irreducible algorithm in organized, intelligible form. Of course, for each point in a map, the corresponding score still has to be computed in advance.
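
As a rough sketch of what a parameter map might look like in code (the score generator and the feature below are invented placeholders, not my actual software), one can sweep a two-dimensional slice of the parameter space, run the generator at each grid point, and record one musically relevant feature for each point:

    // Sketch of a parameter map: sweep a 2-D slice of parameter space, run a
    // (hypothetical) irreducible score generator at each point, and color the
    // point by one feature of the generated score. The generator is a stand-in,
    // not an actual composition algorithm.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Stand-in for a computationally irreducible score generator: returns a
    // sequence of pitches computed from the parameters (a, b).
    std::vector<double> generateScore(double a, double b) {
        std::vector<double> pitches;
        double x = 0.5;
        for (int i = 0; i < 200; ++i) {
            x = a * x * (1.0 - x) + b * std::sin(x); // Arbitrary nonlinear iteration.
            pitches.push_back(36.0 + 48.0 * std::fabs(std::fmod(x, 1.0)));
        }
        return pitches;
    }

    // One musically relevant feature: mean absolute melodic interval.
    double meanInterval(const std::vector<double> &pitches) {
        double sum = 0.0;
        for (size_t i = 1; i < pitches.size(); ++i) {
            sum += std::fabs(pitches[i] - pitches[i - 1]);
        }
        return sum / (pitches.size() - 1);
    }

    int main() {
        const int resolution = 40;
        for (int row = 0; row < resolution; ++row) {
            for (int column = 0; column < resolution; ++column) {
                double a = 2.5 + 1.5 * row / (resolution - 1);
                double b = 1.0 * column / (resolution - 1);
                double feature = meanInterval(generateScore(a, b));
                // Each printed value is one "pixel" of the parameter map.
                std::printf("%6.2f ", feature);
            }
            std::printf("\n");
        }
        return 0;
    }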

I have elsewhere proved, using attractors of iterated function systems interpreted as scores, that there exists a parameter map for all music. However, this IFS map covers far too much ‘noise’ in comparison with ‘music.’ This is because (a) the map contains infinitely many more infinite attractors than finite ones, and the infinite attractors can only be interpreted as scores by discretization, and, even more importantly, (b) the operators and attractors contain no specifically musical structure.
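
Here is a minimal sketch of the underlying idea (the choice of maps and of musical interpretation is purely illustrative): the ‘chaos game’ samples points from the attractor of a few affine maps, and each sampled point is read as a time and a pitch:

    // Sketch: the "chaos game" for an iterated function system whose attractor
    // is interpreted as a score. Each attractor point is read as (time, key).
    // The maps and the musical interpretation are illustrative assumptions.
    #include <cstdio>
    #include <cstdlib>

    struct AffineMap {
        double a, b, c, d, e, f; // x' = a*x + b*y + e; y' = c*x + d*y + f.
    };

    int main() {
        // Three contractive affine maps (a Sierpinski-like IFS).
        AffineMap maps[3] = {
            {0.5, 0.0, 0.0, 0.5, 0.0, 0.0},
            {0.5, 0.0, 0.0, 0.5, 0.5, 0.0},
            {0.5, 0.0, 0.0, 0.5, 0.25, 0.5},
        };
        double x = 0.0, y = 0.0;
        std::srand(1);
        for (int i = 0; i < 1000; ++i) {
            const AffineMap &m = maps[std::rand() % 3];
            double nx = m.a * x + m.b * y + m.e;
            double ny = m.c * x + m.d * y + m.f;
            x = nx;
            y = ny;
            if (i < 100) continue; // Discard transient points before the attractor.
            double time = 60.0 * x;             // Map x to seconds.
            int key = 36 + int(48.0 * y + 0.5); // Map y to a MIDI key number.
            std::printf("i 1 %9.4f 0.25 %3d 80\n", time, key); // Csound-style note.
        }
        return 0;
    }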

I have somewhat refined my approach to parametric composition into the following list of requirements for implementing it:

  1. The score generating algorithm must be irreducible, so that it amplifies the musical imagination and does not serve as a mere shorthand form of music notation. That is the musical motivation for the approach, and is therefore the most basic requirement. This requirement is settled.
  2. The score generating algorithm must compute each piece of music as a fixed point, so that the algorithm possesses mathematical integrity and is not subject to arbitrary musical interpretation. This requirement is settled.
  3. The fixed points must be finite or periodic attractors, so that each point in the attractor can be interpreted as a definite element of music such as a grain of sound, a note, or a chord. This requirement was not sufficiently clear before; now that I have clarified it, I may be able to figure out how to implement it. The most important open question is whether every finite piece of music in fact has a corresponding periodic attractor. What passes for my mathematical intuition suggests that it does.
  4. The space in which the attractors live must have some inherent musical structure, again so that the attractors are not subject to arbitrary musical interpretations. This requirement is sufficiently clear, but has not been completely implemented. I need to be able to represent either a rest, a note, or a chord as a single point in time. I am confident this is possible, but not completely confident that I can do it easily, or even at all. Implementing this requirement would probably also mean that each operator in the algorithm would have some specifically musical meaning.
  5. The parameters for all fixed points must be transformed into 2- or 3-dimensional Hilbert indexes so that the parameter space for the algorithm, no matter how many independent numerical parameters there actually are, can be visually mapped, thus enabling parametric composition by exploring the map. This requirement is settled. Although I have not worked out the details, I am completely confident that it can be done (a sketch of the standard two-dimensional construction follows this list).
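
As a sketch of requirement 5, here is the standard construction of the two-dimensional Hilbert index, which converts a grid cell (x, y) into a distance along the Hilbert curve while preserving locality. The requirement itself runs in the other direction (folding many parameters down into 2 or 3 Hilbert coordinates) and needs a higher-dimensional construction such as Skilling’s algorithm, but the two-dimensional case shows the essential idea:

    // Sketch: distance along a 2-D Hilbert curve for a point (x, y) on a
    // side x side grid, where side is a power of two. Nearby distances map to
    // nearby points, which is what makes Hilbert indexes useful for turning a
    // many-parameter space into a map that can be explored visually.
    #include <cstdio>
    #include <utility>

    unsigned hilbertIndex(unsigned side, unsigned x, unsigned y) {
        unsigned index = 0;
        for (unsigned s = side / 2; s > 0; s /= 2) {
            unsigned rx = (x & s) ? 1 : 0;
            unsigned ry = (y & s) ? 1 : 0;
            index += s * s * ((3 * rx) ^ ry);
            // Rotate/flip the quadrant so the next level lines up with the curve.
            if (ry == 0) {
                if (rx == 1) {
                    x = side - 1 - x;
                    y = side - 1 - y;
                }
                std::swap(x, y);
            }
        }
        return index;
    }

    int main() {
        // Print the Hilbert index of every cell in an 8 x 8 grid.
        for (unsigned y = 0; y < 8; ++y) {
            for (unsigned x = 0; x < 8; ++x) {
                std::printf("%3u ", hilbertIndex(8, x, y));
            }
            std::printf("\n");
        }
        return 0;
    }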

I have recently been developing insight by examining the 3-dimensional voice-leading space of Tymoczko. The 1-voice chords are the edges of the prism that are parallel to its orthogonal axis. The 2-voice chords are the faces of the prism that connect the 1-voice edges. The 3-voice chords are the interior and caps of the prism. I think that the dimensionality of the prism can be increased (e.g. to 12) to obtain every arity of chord by translating to some subspace of that prism.

By also extending octave equivalence to range equivalence, bundles of prisms arise in which these same relations are extended across octaves. It then becomes possible to move up and down a scale, to create the P, L, R, and D transformations, etc., just by using different translations.

Arpeggiation (and playing a scale is a special case of arpeggiation) can be done by iteratively translating a chord such that it reduces to the single tone on each axis of the chord space. This can be considered a projection. It can be done by multiplying the chord by a unit vector that in turn becomes each element of the basis.
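
A small sketch of this, using an illustrative representation of my own rather than any existing implementation: a chord is a vector of pitches, translation is vector addition, and projecting the chord onto each basis axis in turn reduces it to one tone per step, which is an arpeggio:

    // Sketch: a chord as a vector of pitches. Translation is vector addition;
    // "projecting" the chord onto each basis axis in turn reduces it to one
    // tone per step, i.e. an arpeggio. The representation is illustrative only.
    #include <cstdio>
    #include <vector>

    using Chord = std::vector<double>;

    // Translate every voice of the chord by the same interval in semitones.
    Chord translate(const Chord &chord, double semitones) {
        Chord result = chord;
        for (double &pitch : result) {
            pitch += semitones;
        }
        return result;
    }

    // Project the chord onto one axis: keep that voice, silence the others.
    Chord projectOntoAxis(const Chord &chord, size_t axis) {
        Chord result(chord.size(), 0.0);
        result[axis] = chord[axis];
        return result;
    }

    void print(const char *label, const Chord &chord) {
        std::printf("%-12s", label);
        for (double pitch : chord) {
            std::printf("%7.2f", pitch);
        }
        std::printf("\n");
    }

    int main() {
        Chord cMajor = {60.0, 64.0, 67.0};           // C4, E4, G4 as MIDI key numbers.
        print("C major", cMajor);
        print("up a fifth", translate(cMajor, 7.0)); // Translation in chord space.
        for (size_t axis = 0; axis < cMajor.size(); ++axis) {
            print("arpeggio", projectOntoAxis(cMajor, axis)); // One tone per step.
        }
        return 0;
    }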

In short, the use of independent dimensions to represent translation, pitch-class set, voicing, arpeggiation, etc., is a non-starter; it is not even a coherent idea. All the required effects can be accomplished by projecting chords to various subspaces of chord space under range equivalence.

Although it is possible to use one real point for any chord whatsoever, it is not possible to have ‘memory’ of some chord or scale, for the purposes of arpeggiation or iteration, without multiplying by an additional group.

My best take on this now is the chord space of the desired maximum arity under range equivalence, times the group that permutes the dimensions.

The irreducible representations of the product group are what we seek.

Passions, Achievements, and Catharsis

Recently, I have found a new passion in repairing my own house; it is not as elegant as writing computer software, nor as refined as writing music or producing beautiful photographs, but it is nonetheless interesting and exciting. You might think it a rather mindless passion, for the likes of the working class or retirees looking to fill their time, but you would be sorely mistaken. It is quite challenging to do properly, requiring thorough planning and preparation to ensure that the final ‘builds’ look professional. The right tools are necessary too.

The work itself is cathartic, allowing one to focus all one’s energies on a single task and, for a moment, to completely forget everything else: the worries, the stresses, the responsibilities. And the satisfaction in completing a project is amazing, lasting for days as the product of your labours manifests itself right in front of you, completely surrounding you.

Every great work begins with the right tools. I did not want to start out ‘small’ for fear that not having the right equipment would hinder my progress and challenge the depth of my interest in DIY before it could even get off the ground. I advise you, if you find yourself with similar aspirations, to make sure that you are sufficiently endowed to enable you to succeed – and by succeed, I mean, to keep something as your passion.

And so I began by gathering together the best tools for the jobs I intended to do. I quickly realized that a major decision had to be made early on when getting power tools. There are a couple of basic systems available that are not designed to overlap. The key question is the power system used to run one’s tools: it can be mains-based, where power tools are plugged into household electrical outlets; air-based, where power tools are connected to an air compressor; or battery-based, where tools are essentially untethered. Although a mix of tools powered by different power systems is not a bad thing, savings can be made by settling on a single system. Based on raw power output, the range of use, and the future cost of expanding one’s armoury of power tools, I settled on an air-based power system with the air compressor as the central component. There are many different air compressors, each with its own pros and cons, but since the compressor is the central component of the ‘power’ system, the rule of thumb is to spend as much as you can afford on it: the more you pay for the air compressor, the greater its range of uses.

And so I went with the Makita MAC5200 for its large air flow capacity needed for a diverse range of air power tools. From here the decision of which tools to buy was decidedly easier. I am now the proud owner of a Chicago Pneumatic CP1014 air drill, a CP26 Series air screwdriver, a Hitachi NT65MA4 finish nailer, and a Husky 1/4 in. Angle Die Grinder, all of which have already seen action and increased my productivity beyond belief. Having seen what I can achieve with such power, the list of additional tools and accessories that I intend to stock my workshop shelves with continues to grow…

Composing on My Phone; or, Silence Reborn as Silencio

As mentioned in my previous post, I am developing a system for doing algorithmic music composition, also known as score generation or generative music, on Android smartphones (it also runs on ordinary personal computers).

I’ve named the system Silencio, because it is loosely based on concepts from my earlier Java package, Silence (see: Michael Gogins, “Music Graphs for Algorithmic Composition and Synthesis with an Extensible Implementation in Java,” Proceedings of the International Computer Music Conference, September 1998). Note that most of Silence has found its way into CsoundAC, the Csound Algorithmic Composition system.

Silencio, in a very much alpha version, is now actually working. You can find it on Google Code and if you want to run it on a personal computer, check it out using Subversion (or just download the few files) and run the SilencioTest.lua script. If you want to run it on an Android phone, install the Android Scripting Environment on your phone, then mount your phone as a USB drive on your computer and copy the files from your computer to the scripts directory on your phone. I will make a regular Android package at some point.

Silencio is written in pure Lua and runs on Android smartphones that have the Android Scripting Environment installed, as well as on regular personal computers that have Lua or LuaJIT installed. I highly recommend LuaJIT for Intel-based personal computers; it enables Lua code to run only 3 times slower than C++, which is amazing for a dynamic language.

The current feature set of Silencio is short:

  • Event objects represent musical notes, grains of sound, or elements of a control gesture.
  • Score objects contain sets of Event objects and represent musical scores in a low-level, note-based, absolute-time form.
  • Score objects serve as containers for generated music and have methods for:
    • Appending new Events.
    • Saving the score as a Csound score file.
    • Saving the score as a standard Format 0 MIDI sequence file (many thanks to Peter J. Billam, composer and programmer, for his excellent MIDI Lua package for the ease of doing this!), with your choice of MIDI program to channel assignments.
    • Playing back the generated MIDI sequences automatically on the phone.
    • Embedding a Csound orchestra in the Score object, which can be used to render the score if the platform supports Csound.

I will be adding new features soon…

Review of “Oryx and Crake” and “Year of the Flood,” by Margaret Atwood

Atwood proves herself a master science fiction writer in these books, the first two of a trilogy (the third has not yet been published). Spoiler alert: this review begins with plot summaries!

Oryx and Crake tells the story, in reverse via flashbacks, of how Crake, a genius with a grudge against the world, designs both a plague to wipe out humanity and a new species, the “Crakers,” to replace humanity, all the while being employed by a Corporation to provide gene goodies for a toxic Corporation-ruled future. Crake employs or co-opts underground genetic partisans, “God’s Gardeners,” to help with his work. The story is told from the point of view of Crake’s less gifted friend Jimmy. Both men are in love with Oryx, a smart girl from a backwater country. The story proceeds through the death of almost everyone at Crake’s hand. As the plague rages, Crake cuts Oryx’s throat and is in turn shot by Jimmy, who survives because he has been inoculated by Crake in order to tutor his Crakers. At the end of the novel, Jimmy is left watching over the Crakers and trying to keep from being killed by rogue genetic pets such as wolvogs and pigoons. At the very end he chances on the camp of three more human survivors…

The Year of the Flood recounts the same events from different points of view. The God’s Gardeners group, which attempts to contest the Corporate future, has advance warning of the plague and some of its members survive, including a young woman, Toby. Another woman, Ren, survives because she has been held captive in a sex club where she has been working; she has also been Jimmy’s girlfriend. Toby, Ren, and some other survivors make their way out of the city, where they meet Jimmy…

In addition to being a more competent fiction writer than most science fiction writers, which is no surprise given her publishing record, sales, and critical reputation, Atwood also turns out to be a better science fiction writer than most science fiction writers. In this book, she cuts to the heart of a dilemma currently troubling the human race. We have gained a Godlike power over biology, including our own biology. But we are not living up to this responsibility, for which nothing has prepared us. The main characters in the books – Crake, Jimmy, the God’s Gardeners partisans, the Corporations, and even Oryx – epitomize and live through the twists and turns of this dilemma, which are not simple.

In the books, our failure to live up to our responsibilities may well lead to our extinction (at least, that is very much where things appear to be heading at the end of the second book). Crake, a genuine genius, takes all responsibility upon his sole self: exterminate Homo sapiens sapiens with a designer plague and replace us with a successor species custom-designed for Eden.

At the end of the second book, this extermination appears very close to success, and the Crakers, as Jimmy calls the new species designed by Crake and his co-opted biological partisans, are left on the stage. Along with, however, the pigoons and the wolvogs, creatures not created by Crake and that, in biological terms, would prey upon or compete with the Crakers. The pigoons are co-operatively social and intelligent (though apparently not linguistic or tool-using); they are a chimera with nervous systems partly derived from us. They are certainly more dangerous than any non-human creatures currently on this planet, though they would not be a serious threat to an intact human society. I do think they might well pose an existential danger to the Crakers; possibly this is Atwood’s intention. There are also a few human survivors, God’s Gardeners partisans forewarned to isolate themselves from the plague.

In reality, the only extermination program likely to completely succeed against us would be an impact from a very large and fast-moving extraterrestrial body, something big enough to boil the whole surface of our planet. But that is not Atwood’s scenario. The Year of the Flood leaves the pigoons, the Crakers, and a few humans together in a big fat mess. In reality, if the humans pull through, enough human weapons and other hardware would remain to leave the humans firmly in control.

What would happen then would be another novel. Or perhaps Atwood’s third book.  Quite possibly Atwood will find a solution I fail to imagine. We will see.

Whatever the intent, or the weaknesses, of Atwood’s plot, our evolutionary dilemma is terribly real, and it lends Atwood’s tale a terrific power. The human race will not, cannot, remain in its present form for more than another generation or so. We cannot and will not remain as we are. In reality, it is not likely that we can be destroyed, even if we, or some Crake, try to kill us off. Therefore we will change, or we will be changed. Atwood’s novel very definitely lives in this change, and Atwood dreams its motives, fears, loves, and reactions.

There are multiple pressures for large-scale change. In the first place is our palpable overpopulation, which is causing a great wave of extinctions across the planet. These extinctions are irrevocably, possibly catastrophically, altering the ecosystem of our planet. In the second place is urbanization, also irrevocable, and which is changing every selection pressure that is acting upon the human race. Global warming is but one facet of this urbanization. Atwood is very clear on other dire features such as mass consumption, social stratification, and out-of-control Corporations. In the third place, the place of honor, is germ line genetic engineering. Currently, the human race uses germ line genetic engineering on a very large scale for agriculture. At this time, somatic genetic engineering is  being practiced upon a few human beings for medical purposes, with mixed results. Such research is inherently difficult and unpredictable, but the very existence of us and our fantastic capabilities compared with other creatures is a striking sign of what may be possible. Human researchers and institutions have achieved very difficult goals, such as safe jet travel and nuclear power, and I personally do not doubt that we will master genetic engineering.

So, in the next generation or so, it will almost certainly become feasible to do germ-line genetic engineering, with stable results, upon human beings. That means future human beings, or at least some classes of them, may live longer, be smarter in at least some ways, and be profoundly different in other ways. In all likelihood, such genetic engineering will result in multiple species of human beings. I believe the most profound result of genetic engineering on us will be changes in human social biology. This indeed is, in Atwood’s novels, the major agenda of Crake’s engineering: get rid of pair bonding, meat eating, male competitiveness, and other striking features of human society. It’s not clear where Atwood’s preferences go – perhaps simply to telling a good story, or to making us think. But of course other changes would be equally feasible. Ant-like changes. Perhaps a warrior caste, an intellectual caste, and so on. Perhaps more disposition to conform. Perhaps less.

Ultimately, what is 100% certain is that, since not all the effects of these changes can be foreseen by us, they will be subjected to the same old judge that has decided every case of biological “justice” from the very beginning: natural selection, or an inscrutable God.

Change on an unprecedented scale can’t be stopped, so the right questions are: what is the right thing for us to do? What is the right kind of germ-line genetic engineering for human beings? Who shall our descendants be?

Crake’s post-humans are one proposal, but I am not at all sure it is Atwood’s. To me the Crakers seem painted in very satirical colors, though I could well be wrong about that…. A clue may lie in the recordings of the God’s Gardeners hymns to extinct creatures that are featured on Atwood’s blog.

Dear reader, please understand that I doubt I will share Atwood’s values and goals, which perhaps will be revealed more fully in the third book. I definitely favor the germ line genetic engineering of our descendants, but I think I would probably aim for different targets. Atwood seems to be focused on changes to very basic human instincts; I would be more likely to focus on decreasing the incidence of mental illness, reworking our ethnic and national chauvinism on a higher level than basic sexuality, and in other ways attempting to widen the authority of reason without huge unpredictable side effects. However, I think Atwood’s novels are excellent science fiction, and that they grapple with fundamental issues in an admirably thorough and compelling way.

I am eagerly awaiting the sequel!

Goal Refinement Department

I find it essential periodically to re-state my musical goal, so as to clarify just what that goal is and what steps I might follow to attain it.

My goal is to make good music, music that I want to hear and that other people will want to hear. And I want to do this by means of algorithmic composition and algorithmic synthesis, because not only is that what initially attracted me to composing, but also I have repeatedly found that while I can compose in other ways, the results are not impressive. I only seem able to make music that I actually want to hear by programming.

My work is pretty fundamental and really gets down into the basement of what music is, what composing is, and the foundations of music theory. The following sums up my research program. The items are listed in decreasing order of logical priority, but they are roughly equally important from a practical point of view.

  • I need algorithms that can generate any possible score. I have proved that this is possible and I have software that can do it, but it’s not that useful because most of the scores are not good, so…
  • Those algorithms need to be parametrically mappable; therefore they must be controllable by purely numerical parameters. Having a map means one can compose by exploring the map. This is much faster, and also more illuminating, than writing a whole bunch of similar programs.
  • The algorithms must operate on a basis that is musical, so that, when the parameter space is mapped, the things a theory teacher would mark down (which are supposed to be, and pretty often are, the same things a listener would have trouble hearing) can be avoided by sticking to certain regions of the map. This is what will make algorithmic composition actually usable for a wider variety of music: being able to explore compact and/or connected regions where the listening is not torture.
  • It must be easy to generate listenable, glitch-free music that would be hard for composers without the algorithms to imagine. (Otherwise, why bother?) But I’m sure that if I can do the first points, this one is automatic; every method of algorithmic composition that I have tried so far has thrown out a fascinating piece here or there. It is just that usually they have a musical “clunker” or so in them, or there are too many uninteresting ones in the flow.

Currently, I do not have a toolkit that fulfills all these requirements. But I have managed to fulfill some of them in full and some in part; for the others, I have ideas about how to fulfill them.

The biggest missing piece right now is a space for representing music in which compositional operations are matrix operators (which I have implemented in CsoundAC’s Voicelead, VoiceleadingNode, and Score classes) and in which these operators are easy to create and control even when the number of musical voices in the space changes (I have ideas, but I don’t have a working model). The best I can come up with now is a space in which there are dozens or hundreds of potential voices, and ones or zeros on the diagonal of the operators turn those voices on and off. I haven’t done the experiments, yet, that will tell me if this is good enough. Right now it feels kind of clunky; the matrices seem like they will be too big.
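
As a sketch of that idea (an illustration only, not the CsoundAC implementation), a chord living in a fixed-size voice space can be multiplied by a diagonal operator whose entries are ones or zeros, turning individual voices on and off:

    // Sketch: a fixed-size voice space in which a diagonal operator with ones
    // and zeros on the diagonal turns voices on or off. This illustrates the
    // idea only; it is not the CsoundAC implementation.
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t voiceCount = 8;
        // Potential voices; unused voices are simply zero.
        std::vector<double> voices = {60, 64, 67, 72, 0, 0, 0, 0};
        // Diagonal of the operator: 1 keeps a voice, 0 silences it.
        std::vector<double> diagonal = {1, 0, 1, 1, 0, 0, 0, 0};
        std::vector<double> result(voiceCount, 0.0);
        for (size_t i = 0; i < voiceCount; ++i) {
            result[i] = diagonal[i] * voices[i]; // Matrix-vector product, diagonal case.
        }
        for (size_t i = 0; i < voiceCount; ++i) {
            std::printf("voice %zu: %g\n", i, result[i]);
        }
        return 0;
    }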

The other big missing piece is the linear treatment of time, i.e. making sure that chord progressions always follow each other in time and never stack up on top of each other, but I’m pretty confident that fractal interpolation based on iterated function systems of operators in voice space can do the job. Still, I need to do the experiments here, as well.

How to Program

This advice assumes you already are a programmer – but that you want to be a much better one! It reflects my own personal methods, and may not work for everyone. Still, I hope you find it helpful. I know that I myself am a much better programmer now than I was when I started, and this is what I have learned.

My advice reflects decades of experience in multiple languages, multiple problem domains, and multiple applications that are used all over the world. I have written this in the order of the steps that you would take in developing a new software project; but I have also prioritized the importance of the critical steps.

In general, I am trying to impart a scientific approach to programming.

Before anything else, start a written log for your project. The purpose of the log is to clarify your thinking about the project, to record test results, and to save you from having to solve the same problem more than once. I prefer LaTeX for maintaining such logs, because LaTeX turns out to be easier to use, and to produce more beautiful results, than word processors. But using a spreadsheet or a word processor would also work. The log should be used in conjunction with the comments that go into your source code control system whenever you check in a revision to a file (see below). In a sense, the log and the checkin comments are part of the same system. The present paragraph is the second most important paragraph in this advice.

At the beginning of the project log, write down as clearly and concisely as possible what the new software should do. This is the “Program Specification” (also called “Design Specification”) for your project. At first, you will not be able to be precise enough or complete enough. But be as clear and concise as you can be. As the project proceeds, you will clarify and re-clarify the “Program Specification.” It will become the Introduction to the “User’s Guide” for your software.

Use only general-purpose, high-level, widely used programming languages. These are (roughly in order of actual usage for critical software such as operating systems, or high-performance applications in warcraft, automobiles, medicine, finance): first C++ and C; then Java, C#, and Python; possibly FORTRAN, possibly Lua. Use only one language if at all possible; but in some cases, you will need to use more than one. For example, if your software needs to be user-programmable, then often it is a good idea to write the user-accessible parts in Python or Lua and the internals in C++. For another example, you may want to write some numerical routines in FORTRAN. The main reasons for using widely-used languages are (a) they are better maintained, and (b) when (not if) you run into difficult problems in using the language, you will probably be able to google for solutions because (remarkably often!) other people will already have encountered your problems and solved them for you.

Use the optimal language (or languages). Identify your problem domain (e.g. in my case computer music, trading systems, or e-commerce) and look at the leading software systems in that domain to see what languages they use. But if you think some other language would be better, go ahead and add it to the list. Make a scatter plot. The X dimension is the rank of each language by frequency of usage in your problem domain, with the most used language closest to the origin. For the Y dimension, weight the following in order of importance for your project: runtime efficiency, ease of writing code, ease of reading code, and size of user base; the weighted average of these features is each language’s Y value, again with the best value closest to the origin. Plot each language as one point, and simply use the language that is closest to the origin. Use that language even if you have to learn it first. Do not use a language because you know it, or like it, or because it is fashionable – use the language that looks most likely to solve your actual problems. However, trust me – you can’t go wrong with C++.
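
As a rough sketch of this procedure (every score and weight below is invented purely for illustration), one can rank each candidate language on usage and on a weighted average of the other criteria, then pick the language nearest the origin:

    // Sketch: ranking candidate languages by usage (X) and by a weighted average
    // of project criteria (Y), then choosing the language closest to the origin.
    // All scores and weights below are invented purely for illustration.
    #include <cmath>
    #include <cstdio>

    struct Candidate {
        const char *name;
        double usageRank;   // 1 = most used in the problem domain.
        double efficiency;  // Each criterion scored 1 (best) to 5 (worst).
        double easeOfWriting;
        double easeOfReading;
        double userBaseSize;
    };

    int main() {
        Candidate candidates[] = {
            {"C++",    1, 1, 3, 3, 1},
            {"Python", 2, 4, 1, 1, 2},
            {"Lua",    3, 3, 2, 2, 4},
        };
        // Weights reflecting the importance of each criterion for this project.
        double weights[4] = {0.4, 0.2, 0.2, 0.2};
        const Candidate *best = nullptr;
        double bestDistance = 1e9;
        for (const Candidate &c : candidates) {
            double y = weights[0] * c.efficiency + weights[1] * c.easeOfWriting +
                       weights[2] * c.easeOfReading + weights[3] * c.userBaseSize;
            double distance = std::sqrt(c.usageRank * c.usageRank + y * y);
            std::printf("%-8s x=%.1f y=%.2f distance=%.2f\n", c.name, c.usageRank, y, distance);
            if (distance < bestDistance) {
                bestDistance = distance;
                best = &c;
            }
        }
        std::printf("Closest to the origin: %s\n", best->name);
        return 0;
    }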

If existing programs or libraries solve your problem, or part of it, then use them. Do not re-write software that already works. Prefer open source software to proprietary software – it is cheaper, it is easier to understand, and you can try fixing it yourself when (not if) you run into problems. Prefer third-party software with larger user bases, since such software is likely to be better maintained and to last longer. Also prefer software that runs on multiple platforms. Do not distract yourself from the difficulty of your real problems by playing with code for no good reason. The present paragraph is the third most important paragraph in this advice.

The choice of build system is almost as important as the choice of language. I prefer SCons; others prefer CMake. Do not use autotools, or makefiles, or Visual C++ project files. Everything goes into one build file – libraries, programs, tests, documentation, installation, you name it. The present paragraph is the fifth most important paragraph in this advice.

You also need, in order of decreasing importance, a source code control system, a good visual source-level debugger, a good code editor, and a GUI form builder. There is no reason not to use an open source system, such as Git or Subversion, for source control unless your institution requires otherwise. In many cases Emacs is a good editor, in other cases Eclipse, in yet others Visual Studio.

Once you have got this far, create an empty source repository and empty build system for your project, write a stub application (or library, or whatever it will be), and compile it. The reason for doing a stub first is that, this way, you don’t begin by writing loads of code that turns out not to be correct for your platform or compiler. Every time you write some code, rebuild the project. Get rid of all compiler warnings if at all possible.

Now you are writing code that compiles without warnings. If your editor has an automatic code reformatting feature, use it constantly instead of manually formatting your code. If you are just getting started, consider using the default formatting of your editor. If your company has a policy on this or you want consistency with existing code, change the reformatter settings as required, but do use the auto-reformatting. Don’t put blank lines in your code, except between class and function declarations and definitions, because the more code there is on a page, the easier it is to read, and your idea of where to put a blank line inside a function is not the same as mine.

Do not comment code; instead, use understandable names for functions, variables, and classes – including understandable names for loop indexes and such. One reason is that you will change the code and forget to change the comments – therefore, comments can actually end up making code harder, rather than easier, to read. An even more important reason is that doing this also helps you to clarify the code, which helps to make sure that it really solves your problems (see below).

But there are important exceptions. You need a “Reference Manual” for your software. Whenever you need to document something for the “Reference,” do it only using Doxygen comments in the code. Also, if you are not sure whether you yourself have clearly understood an algorithm that you are about to code, or you think somebody else reading your code might have trouble understanding how a function or algorithm works, put in comments that explain what is happening as clearly and concisely as possible – always using complete sentences that start with capital letters and end with periods. Whenever you clarify the code, either remove or clarify these comments as well.
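
For example, a Doxygen comment for the “Reference Manual” might look like this (the class and function here are invented for the illustration):

    // Illustration: Doxygen comments documenting a hypothetical class and
    // function for the "Reference Manual". Both are invented for this example.
    #include <vector>

    /**
     * A minimal score: a list of pitches as MIDI key numbers.
     */
    struct Score {
        std::vector<double> pitches;
    };

    /**
     * Transposes every pitch in the score by the given number of semitones.
     *
     * @param score      The score to transform in place.
     * @param semitones  The interval of transposition; may be negative.
     */
    void transpose(Score &score, double semitones) {
        for (double &pitch : score.pitches) {
            pitch += semitones;
        }
    }

    int main() {
        Score score{{60, 64, 67}};
        transpose(score, -12); // Down one octave.
        return 0;
    }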

Do not abbreviate, either in code or in comments. Spell out everything, even if it takes more typing. The reason is that you will search for names (often), and if you use abbreviations you will not remember which abbreviation you used.

As for checkin comments, keep them as short as possible; their real purpose is to help someone looking at the source code tree to find changes. That person will then look at the source code, and at the comments in the source code, for more complete information.

Do not use the C preprocessor if at all possible (or any other similar facility), except to prevent multiple inclusions of header file contents. The preprocessor not only makes code harder to debug, it also makes code harder to read.

In C++, put as much code as possible into your header files. Put member function definitions inline, in the class bodies, in the header files. Either put each class into its own header file, or put all related classes into a single header file. In many libraries, you can put all the code into header files, or even one header file; and if you can, you should. If you are writing a plugin that does not expose a programming interface for compile-time linkage, then put all the code into one regular source file, including all declarations (i.e. no header file at all). The idea is to have the smallest possible number of files. Bigger files are easier to search and to read, and fewer files make the build system easier to maintain. Modern compilers won’t complain. Similarly, you want to have the smallest possible number of declarations, hence it is better to define functions inline in the class bodies in the header files.
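
A minimal sketch of this style: one header file with an include guard and a class whose member functions are defined inline in the class body (the class is illustrative, not taken from any existing library):

    // Event.hpp -- sketch of the header-only style: an include guard and a class
    // whose member functions are defined inline in the class body.
    #ifndef EVENT_HPP
    #define EVENT_HPP

    #include <cstdio>

    class Event {
    public:
        Event(double time, double key) : time_(time), key_(key) {}
        double getTime() const { return time_; }
        double getKey() const { return key_; }
        void print() const { std::printf("t=%g key=%g\n", time_, key_); }
    private:
        double time_;
        double key_;
    };

    #endif // EVENT_HPP

    // A program that uses the class only needs to include the header; there is
    // no separate .cpp file for Event to compile or link.
    int main() {
        Event event(0.0, 60.0);
        event.print();
        return 0;
    }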

Now you are ready to design. Start with the basic ideas. They will vary from project to project. For a library, that might be the API declaration; for a program, it might be the basic data structures and algorithms, or a database schema; for a plugin, it might be a data flow graph. Finish the declarations, implement stubs for all functions, declare instances of all objects in the test code, and get it to compile without warnings. Now start actually implementing functions…

All of the above is preliminary. The following is the core of my advice. The last numbered item is the most important, but the first item is the one to which you should pay the most constant attention.

  1. Do not write fancy, tricky code. Write plain, straightforward code. If at all possible, use only the facilities of the language itself. Never use a platform-specific system call or a function in a third-party library, if the C runtime library or the standard C++ library contains a function that does the same job. Indeed, if you are using C++, use the Standard C++ Library of collections and algorithms as much as possible; it is fantastic software, roughly as fast as assembler, usually does not require debugging, and is easier to read than what you would come up with. If your code is multi-threaded, prefer OpenMP, which is supported by C, C++, and FORTRAN compilers, to pthreads or Intel Threading Building Blocks, and implement multi-threading using task constructs at the highest possible level of granularity. Use only the core, stable features of the language. Use standard algorithms if at all possible. Write code that kiddies can read and understand. Your code should look just like the short examples in a beginner’s text for your language. Believe me, such plain code will run faster and be easier to maintain than fancy code. Yet clear code has an even more important role in testing the correctness of your ideas (as explained below).
  2. As you write your software, also write a program (or programs) to test it. Keep adding tests to this program or programs until your project is finished. If you write code without testing all important use cases, your code will not be correct. (A minimal sketch of such a test program follows this list.)
  3. If your code has a client-server structure, or a protocol stack, make completing a round trip through the stack the priority, before adding any other functionality to your project.
  4. Re-work the design of your code as often as necessary. This includes re-writing the “Program Specification.” Code expresses ideas. These ideas are like a scientific theory or set of engineering principles. Or rather, they are not like a theory; they are a theory – and a formal theory, to boot, since a programming language is a formal language. The ideas/code should capture the problem domain completely, without contradiction, and as concisely as possible. You must re-work your code until it produces the scientific or artistic “Ah-HAH!” sensation that tells you, both logically and intuitively, that you have captured the problem domain in this complete, consistent, and concise way. You can get the “Ah-HAH!” from a half-baked idea; but you cannot have a correct idea without the “Ah-HAH!”  If anything seems not quite right, or is bothering you even a little, you must rework the code. Such intuitions are the foundation of success in programming. Some examples of software that elicit this “Ah-HAH!” are the Standard Template Library in C++, the Jack system for Linux audio, the C programming language, the Scheme programming language, the Lua programming language, the SQL language for databases, the Swing user interface toolkit in Java, and the first spreadsheet programs. Examples of software that does not do this are the Java language itself, the C++ programming language, Microsoft Office, and all operating systems.
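
Here is a minimal sketch of the kind of test program mentioned in item 2, using nothing but assertions from the standard library; the function under test is invented for the example:

    // Sketch of a minimal test program: plain assertions covering the important
    // use cases of a (hypothetical) function. Grow this file as the project grows.
    #include <cassert>
    #include <cstdio>
    #include <vector>

    // The function under test: transpose a melody by some number of semitones.
    std::vector<double> transpose(const std::vector<double> &pitches, double semitones) {
        std::vector<double> result = pitches;
        for (double &pitch : result) {
            pitch += semitones;
        }
        return result;
    }

    int main() {
        // Ordinary case.
        std::vector<double> up = transpose({60, 64, 67}, 7);
        assert(up[0] == 67 && up[1] == 71 && up[2] == 74);
        // Identity case.
        std::vector<double> same = transpose({60, 64, 67}, 0);
        assert(same == std::vector<double>({60, 64, 67}));
        // Edge case: empty input.
        assert(transpose({}, 5).empty());
        std::printf("All tests passed.\n");
        return 0;
    }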

The reason for putting clarity and testing ahead of formulating ideas is that, in reality, you will not be able to write clear, concise code that runs without error until you have grounded your code in consistent, concise ideas that completely capture your problem domain. In other words, the tests are the engineering experiments that tell you whether your code/theory about the problem domain is correct.

Yet I have put the clarity of the code ahead even of passing all tests – because although it definitely is possible to write code that passes all tests, but does not express a complete solution to the problem, it is all but impossible to write such code clearly. In other words, the more clearly the code is written, the easier it is to spot what parts of the problem you have missed. The present paragraph is the most important paragraph in this advice.

To get the utmost runtime performance from your code, proceed in order of decreasing importance:

  1. If your code can be analyzed into tasks that can run at the same time without affecting each other, do this at the highest possible level of granularity and run them in parallel, e.g. using OpenMP (a minimal sketch follows this list).
  2. Make sure you are using the highest-performance libraries, e.g. for matrix operations use the Intel Math Kernel Library or Eigen.
  3. Make sure you have the optimal build options, e.g. in-line as much as possible, use auto-vectorization.
  4. Then cut to the chase. Do not try to figure out what code is fast and what is slow; unless you are a chip designer or a compiler engineer, you cannot know enough about what the compiler is doing or will do. Use a profiler to measure the actual speed of your code. Basically, start with the block of code that eats the most time and speed it up. If you have been following my general advice here, this is not likely to do much good unless it enables you to select or design a more suitable algorithm to use for that block. Then go on to the block of code that eats the next most time and speed that up; and keep going until you are not significantly speeding up the project. “Significant” in this context is plus or minus 5 or 10 percent. Write up all results in the project log.
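
A minimal sketch of the kind of high-granularity parallelism mentioned in item 1, using an OpenMP parallel loop so that whole, independent sections of a piece are the units of parallel work (the rendering function is a stand-in for real work):

    // Sketch: OpenMP parallelism at the highest possible level of granularity.
    // Each "section" is rendered independently, so whole sections are the
    // parallel tasks. Compile with, e.g., g++ -fopenmp.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    double renderSection(int section) {
        // Placeholder for expensive, independent work (e.g. rendering one section).
        double sum = 0.0;
        for (int i = 0; i < 1000000; ++i) {
            sum += std::sin(section + i * 1.0e-6);
        }
        return sum;
    }

    int main() {
        const int sectionCount = 8;
        std::vector<double> results(sectionCount, 0.0);
    #pragma omp parallel for
        for (int section = 0; section < sectionCount; ++section) {
            results[section] = renderSection(section);
        }
        for (int section = 0; section < sectionCount; ++section) {
            std::printf("section %d: %g\n", section, results[section]);
        }
        return 0;
    }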

Sometimes the debugger is essential, but for the most part, you should avoid it like the plague. As a project grows, the software quickly becomes complex, and then tracing through the debugger can become unbelievably time-consuming. Rather than debug, re-read the code line by line and try to figure out what it is actually doing (thanks, Vipin!). Formulate a theory of what went wrong, and put in print statements to test your theory. You can narrow down where the problem is by trying this in different places, working from higher up in the code on down to lower levels. However, if your program crashes or throws an exception, use the debugger right away, because it will automatically show the stack trace at the time of the crash or exception. Write up all theories and test results in the project log. Note that on platforms with something like DTrace, you do not need to explicitly code print statements. On such platforms you should maintain a library of trace scripts for the project, perhaps in the body of the project log (another reason for using LaTeX, which uses plain text files). The present paragraph is the fourth most important paragraph in this advice.

If you work in an environment where code review is possible or required, by all means, use it. Don’t use it automatically; but do use it the instant you get into a bind, or any time you think someone else  is more likely than you are to solve your problems.

If you follow this advice, and especially if you internalize the philosophy of writing the clearest possible code and testing all use cases, you will be surprised at how little debugging you need to do. You will spend a very short time setting up your build system and stub project; what seems like forever fussing around, with your head hurting, while you re-work and re-work the ideas and clarify and re-clarify the initial code; then the “Ah-HAH!” will come, and you will finish coding all functionality and test cases in what seems like a remarkably short time – often, coding as fast as you can type.

When you are done you will also have a test suite, a “User’s Guide,” and a “Reference Manual;” and your code will probably port very easily to other operating systems.

The Black Hole Engine — Take Two

The Foundational Questions Institute (FQXi) has now awarded first prize in its annual essay contest to Louis Crane for Stardrives and Spinoza. This is a follow-up to Are Black Hole Starships Possible? by Crane and Shawn Westmoreland, which I have already commented on below.

The theme of this year’s contest is “What is ultimately possible in Physics?” I am interested in everything in the essay — the limits of science indeed, but also the future of technology, the possibility of interstellar travel, and the theological questions that Crane brings up at the end of his essay.

Some things about the essay make me rather uneasy. I truly do not know what to make of Lee Smolin’s idea that to create a black hole, even a small one, is to create an entire new Big Bang and subsequent new universe. My uneasiness arises from the fact that I am not at all comfortable with the idea of being, as a black hole engineer, the likely creator and destroyer of a very large number of civilizations. Either this goes way above my pay grade, or it shows me that I am a drastically larger creature than I thought I was, or it shows me that no sentient being can avoid these dizzying and dismayingly unpredictable responsibilities.

It also makes me uneasy that Professor Crane avowedly sees this project as fusing scientific and religious motives. I prefer to keep them separate. I believe that my God is rather taller than even the most ancient of black hole engineers. While I am attracted to the idea of divinization up to a point, I feel that to identify the human (or the transhuman) and the divine, even as the limit of a sequence as Crane implies, is, in the end, nothing more than self-idolatry.

Setting aside the potential for human beings to assume apparently God-like powers, I regard Louis Crane’s line of thought as being of the first importance. One of his most striking points is that an artificial black hole could be an experiment to decide between theories of quantum gravity, by measuring alternative predictions for the phenomenology of Hawking radiation near the Planck scale in the real world.

But, of course, for me personally, the most exciting thing is a practical proposal, or at least a not obviously impractical proposal, for starships and a starship-based civilization and economy. Among other things, this turns the heat way up on Fermi’s old question, “Where are they?”

Also, the essay proceeds in good Gödelian style by reasoning in a mathematical and scientific way about a foundational question that has well-defined meanings both operationally and philosophically. In this way, it is possible to make theological inferences based on matters of fact, or at least, to clarify dilemmas and meanings.

Despite my theological reservations, I applaud Professor Crane, and I consider the essay to have the potential for influencing the future of human civilization. If it does, it may submerge and resurface in accordance with the spirits of the ages and their technical powers.

In his conclusion, Professor Crane writes:

We have outlined a plausible future where a central project for the whole world gives us an almost infinite extension of our range as a result of an immense collective effort. Depending on subtle points of interpretation of general relativity, this could come with a new understanding of our place in the universe, in which we play a role in its cycle of creation.

Again prescinding from the theological implications, I quite agree that such collective projects are highly desirable. I even fear that without them, we may well either not survive ourselves, or become a good deal nastier than we already are.

Note to science fiction writers: I’m sure Crane is quite correct that no more efficient engine can be made, but it seems clear that making such an engine is beyond the efforts of any individual or small group, at least until such time as one person can successfully direct the work of an entire large army of robotic spacecraft. The levels of difficulty in technology, and in the evolution of civilizations, are real.

Can we get down now? Can we get going?

Consequences of Nuclear Deterrence — Seen at Age 19 and Age 59

When I was 19 I thought and wrote about the future from the standpoint of a would-be “hard” science-fiction writer.

My basic thinking was to explore the long-term consequences of the nuclear balance of terror. I did not believe that the balance was truly stable (events have since proved me correct). I thought there actually would be a general nuclear war, and that it would leave survivors at least in the southern hemisphere, and that some of those survivors would still have nuclear weapons. In fact, the survivors either would have a monopoly on nuclear weapons, or they would keep on fighting until they had won such a monopoly. They would then spy on everyone using computer technology in order to keep that monopoly. They would become corrupt and brutal, and they would be very difficult to get rid of. I thought there would be no riddance of these dictators until we returned to frontier conditions by colonizing outer space. I felt (and still do) that the familiar science-fictional picture of a world pushed back into the dark ages by a nuclear war, affording survivors a test of virtue and a “fresh start”, is a romantic fantasy that ducks the hard questions.

We have not had a nuclear war — thank God.

But we have had several nasty near misses. And now, we have nuclear proliferation plus a great deal more surveillance than used to be possible. In fact, the logic underlying my original thinking still seems quite sound. Nuclear weapons (along with other factors) do seem to be pushing us into a world of widespread and frequent surveillance. It will, most likely, be an unofficial empire or, more likely, a co-dominium of great powers dedicated to preventing something like a nuclear war between Israel and Muslims that happens to kill a few hundred million people outside of the war zone, not to mention severe environmental consequences such as a decade of global cooling. If there is a co-dominium, then competition between the ruling powers, who are very unlikely to love each other, plus the resentment of the ruled, will create large zones containing what amounts to a dark age. Again, in spite of the increasing (although fragile) prosperity and health of the world on average, such zones do appear to be growing — Maoist sections of India, Nigeria, Congo, various drug-producing boondocks.

To possess true weapons of mass destruction — nuclear weapons or plagues — is completely irrational beyond the level of the absolute minimum required for deterrence. Once you have accomplished deterrence, additional weapons can only mean your own country, your own children, will be hit that much harder if the hammer comes down. To use such weapons would be even more irrational.

If the probability of their use does not decline to a very low level with time, then they will eventually be used. Any use will slam the world into a very bad place. Think 9/11 to the 10th power.

Answer me honestly now, using ordinary common sense: is the probability of use falling, or rising?

There is a nightmarish, “frozen” quality to this scenario. Nuclear deterrence makes it very difficult to actually win a large conflict. Conflicts, and even more the root causes of conflict, can continue to fester for long periods.

Structured Informel

Kristina and I saw Gerhard Richter’s “Abstract Paintings” at the Marian Goodman Gallery today. The paintings grasped me, and the room tilted this way or that depending on which painting held me in its gaze. Some of them — many of them — are quite colorful and beautiful. On the whole, the room had as much or more bodily impact than any other room of paintings I have entered, which is saying a lot since I have been going to galleries and museums since I was a child, going with my parents, both of whom painted. My mother indeed was a fine artist herself, as were both of her parents.

When I got home I turned to Richter’s own anthology of writings, The Daily Practice of Painting. I found that Richter accepted from an interviewer the appellation Informel, a term coined to denote informal process, especially in abstract painting.

Of course my own work in music depends utterly on formal languages and is as highly structured as you like. Still, there is something very important in common with Richter’s working method. Both of us spend a lot of time allowing some process to produce material, in ways not completely controllable or predictable in advance, and then we try to find whether the result is any good or not. This repeated decision making has very informal grounds, and in that sense algorithmic composition, appearances to the contrary, is as Informel as action painting.

But there may be more to it than that, and this bears thinking about. Certainly I am trying to bring into the algorithms themselves some sort of structure, some sort of guidance, something that filters according to musical perception and even tradition. But I can’t afford to lose the unexpected, the surprise, that which will demand a decision that must indeed be Informel. This is a fine dialectic. This automation is something I need to think about more — all the more necessary, to the extent that I do succeed in modeling perception or tradition.

Richter says painting is not imitating Nature, but I feel that the automatic processes both are an aspect of Nature, and to some extent an imitation of Nature, or simulation of it.

So, perhaps here is a way in which artifice and Nature are similar — when captured or simulated in algorithmic form, they both exemplify computational irreducibility. Both Nature and tradition embody generative structures.

But no matter how many or how sophisticated such generative processes may be, artist and audience alike must reduce them to the Informel — accept, or reject, on grounds that can never be completely explicated or formalized.