When we first started building programs to move between MusicXML and other music formats, we called them converters. Conversion implies that the change from one format to another is central. We have since realized that a more productive metaphor is that of translation: the interpretation of one form of human expression into another. Our first software translation products are named after the 16th-century French translator Etienne Dolet.
At Recordare we have produced four MusicXML translators to date: two-way translators for Finale and MuseData, and one-way translators from NIFF and to Standard MIDI Files. Each translation brought up different issues of interpretation that needed to be addressed successfully to make MusicXML an effective interchange format.
Our MuseData translator was built together with the initial design of MusicXML. The first version of MusicXML was primarily an adaptation of the MuseData format into XML form, with the addition of a timewise format to simulate Humdrum’s two-dimensional lattice structure within a hierarchical language.
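The two document shapes can be shown in skeleton form. In the partwise format, parts contain measures; the timewise format inverts the hierarchy so that measures contain parts, approximating Humdrum's lattice within XML's tree structure (the content of the inner elements is elided here):

```xml
<!-- Partwise: each part contains its measures -->
<score-partwise>
  <part id="P1">
    <measure number="1"> ... </measure>
    <measure number="2"> ... </measure>
  </part>
</score-partwise>

<!-- Timewise: each measure contains its parts -->
<score-timewise>
  <measure number="1">
    <part id="P1"> ... </part>
  </measure>
  <measure number="2">
    <part id="P1"> ... </part>
  </measure>
</score-timewise>
```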
Our design decision was to make MusicXML a superset of MuseData, so that we could do a 100% conversion (as we thought of it then) from MuseData into MusicXML and then back again. We adopted features in their entirety even when we were unclear about their utility for a general-purpose translation language. Since the initial design, we have removed some documented features that are not used in practice, and included some undocumented features that are indeed used in the MuseData files available from CCARH.
Given that MusicXML covers a superset of MuseData features, we encountered no major translation difficulties with this first piece of software. This gave us confidence that the MusicXML language was indeed strong enough to serve as a basis for music representation, without hidden problems that would only be revealed through implementation experience. Translation issues have emerged later, as more programs translate back and forth to MusicXML. These programs may make use of features that are present in MusicXML but not in MuseData, or may be used to translate classical repertoire from later eras than MuseData's design focus. For instance, translating pieces by Chopin, Mahler, and others with multiple large tuplets causes problems when the number of divisions needed for precise durations leads to notes with durations that cannot fit into a 3-character MuseData field.
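The arithmetic behind this limit is easy to sketch; the particular mix of tuplet sizes below is a hypothetical illustration, not drawn from a specific score. Durations are expressed as integer counts of a per-quarter-note divisions value, so representing several tuplet sizes exactly requires a common multiple of the tuplet note counts, and the duration of a long note can then overflow a 3-character (maximum 999) field:

```python
from math import lcm

# Tuplet sizes that must all be representable exactly in one piece
# (e.g. triplets through 11-tuplets); values are illustrative.
tuplets = [3, 5, 6, 7, 9, 11]

# Smallest divisions-per-quarter-note that represents them all.
divisions = lcm(*tuplets)

# A whole note spans 4 quarters at this resolution.
whole_note = 4 * divisions

print(divisions, whole_note)  # both far exceed the 999 maximum
```

Here divisions alone is 6930, and a whole note lasts 27720 divisions, so neither value fits in three characters.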
We next added translators for two binary formats: from NIFF to MusicXML and from MusicXML to Standard MIDI Files. By testing translation with NIFF’s highly graphical format and MIDI’s performance-only format, we could determine whether MusicXML really did have the scope to handle a variety of music formats far afield from our MuseData and Humdrum starting points.
Building the NIFF translator gave us a more detailed understanding of the problems with highly graphical formats for music interchange. While our prototype worked well for testing purposes, it was clear that building an industrial-strength NIFF translator, while possible, would take a great deal of time and effort. We decided instead to persuade the author of the most commercially important application writing NIFF files (SharpEye Music Reader) to write MusicXML files as well. SharpEye was the first product to support the MusicXML format, and its implementation experience demonstrated that MusicXML could indeed be implemented successfully by third-party developers.
Given the existence of MuseData to MIDI translators, we were not surprised when the MusicXML to MIDI translator posed no major challenges. An interesting aspect of these translators is our use of XML as an intermediate format for both the NIFF and MIDI files. This creates an easier-to-program structure for these binary formats.
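The idea of an XML intermediate for a binary format can be sketched as follows; the event layout and element names are hypothetical, not Recordare's actual intermediate representation. Once the flat binary events are lifted into a tree, they can be inspected, transformed, and debugged with standard XML tools rather than byte-level code:

```python
import xml.etree.ElementTree as ET

# Hypothetical flat event list as decoded from a binary file:
# (delta_time, status_byte, data1, data2) tuples, MIDI-style.
events = [(0, 0x90, 60, 64), (480, 0x80, 60, 0)]

root = ET.Element("midi-track")
for delta, status, d1, d2 in events:
    ev = ET.SubElement(root, "event", {
        "delta": str(delta),
        "status": f"0x{status:02X}",
    })
    ET.SubElement(ev, "data1").text = str(d1)
    ET.SubElement(ev, "data2").text = str(d2)

# The resulting tree is straightforward to traverse or serialize.
print(ET.tostring(root, encoding="unicode"))
```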
Translating to and from Finale posed the largest challenges for MusicXML to date. Finale is a fully-featured industrial application, and it poses the expected challenge of a program whose feature set exceeds that of the interchange format. In many cases we added features that were necessary to support effective import from SharpEye to Finale (for instance, system and page breaks), but other Finale features are still unsupported.
The more interesting issues come from the differences in structure between Finale and MusicXML files. In many of the fundamentals there is a great deal of similarity. Finale's underlying frame structure (a single measure on a single staff) has up to four layers, and each layer can have one or two voices. The layers and voices are similar to MusicXML's <voice> element. Moving between layers and voices is handled in Finale by means very similar to the <forward> elements in MusicXML.
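The shared model can be illustrated with the usual MusicXML idiom for two voices on one staff: each note carries a <voice> number, <backup> rewinds the musical counter so the second voice can start from the beginning of the measure, and <forward> skips ahead within a voice. This fragment assumes a quarter-note duration of 4 divisions:

```xml
<measure number="1">
  <note>
    <pitch><step>C</step><octave>5</octave></pitch>
    <duration>4</duration>
    <voice>1</voice>
  </note>
  <backup><duration>4</duration></backup>
  <note>
    <pitch><step>E</step><octave>3</octave></pitch>
    <duration>4</duration>
    <voice>2</voice>
  </note>
</measure>
```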
When we get to articulations and expressions, things work very differently in Finale and MusicXML. Finale is designed to be open-ended and extensible, so there are few of the built-in abstractions present in MusicXML. These abstractions must instead be inferred from the definition of a musical symbol in the Finale database.
As an example, what Finale structure should translate to a <staccato> element in MusicXML? Is it an articulation whose playback definition produces a staccato effect, or one whose symbol is the staccato character in a music font?
In practice we have found that the font definition works more reliably. Finale is a notation program, and people generally pay much more attention to appearance than playback within Finale files. But using this definition limits the translation to fonts that you have seen before, so that you know which musical glyphs are associated with each code in a music font.
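A sketch of this font-based approach, with all font names and character codes hypothetical (real music fonts have their own glyph layouts): the translator keeps a table of known fonts mapping character codes to abstract marks, and declines to classify glyphs from fonts it has not seen:

```python
# Hypothetical glyph tables for fonts the translator recognizes.
# Font names and character codes here are illustrative only.
KNOWN_FONTS = {
    "ExampleMusicFont": {46: "staccato", 62: "accent", 94: "marcato"},
}

def classify_articulation(font, char_code):
    """Map an articulation's font character to a MusicXML element
    name, or None if the font or glyph is unknown."""
    return KNOWN_FONTS.get(font, {}).get(char_code)

print(classify_articulation("ExampleMusicFont", 46))  # staccato
print(classify_articulation("UnseenFont", 46))        # None
```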
Currently, the major barrier to even better Finale/MusicXML translations is the incomplete documentation for the Finale format provided in the current Finale 2000 plug-in developer’s kit. This is akin to a translator working just by the context of word usage, with only an incomplete set of dictionaries for reference. Coda has suggested that this documentation may be improved in a future version of the developer’s kit.