# A Computational Model for Rule-Based Microtonal Music Theories and Composition

This is an early draft of a paper, which was later published as
Anders, Torsten, and Eduardo Miranda. 2011. "A Computational Model for Rule-Based Microtonal Music Theories and Composition." *Perspectives of New Music* 48(2).
In contrast to the published version, however, this draft includes the full source code for all examples, in order to make the research reported by this paper reproducible. Also, sound output is provided for the examples shown in music notation (although the sound quality is not at all fancy). Warning: some of the code blocks are rather long, and the text (written for an audience of musicians) does not explain the code. So, on a first reading just skim the code (or read the published version of the paper), and return to the code blocks if you want to study implementational details of the examples. Note that the sources for several examples depend on code defined in GlobalDefs.oz.

## Motivation

Microtonal music is an influential facet of 20th and 21st century music composition. Composers who contributed significantly to microtonal music include figures as diverse as Julian Carrillo, Ben Johnston, Harry Partch, Horatiu Radulescu, Karlheinz Stockhausen, James Tenney, Ivan Wyschnegradsky and La Monte Young (in alphabetic order).

As microtonal music opens wide areas of uncharted musical territory, computational support can be very helpful for navigating this unfamiliar landscape. For example, various programs play back microtonal music, which allows for listening to the music during the composition process. Most sound synthesis programming systems allow for microtonal sound generation (e.g., Csound {Boulanger, 2000}; SuperCollider {McCartney, 2002}; Max/MSP {Puckette, 2002} and PureData {Puckette, 1996}). Other systems assist in the development and analysis of microtonal scales, such as Scala by Manuel Op de Coul; CSE by Aaron Hunt; and L'il Miss' Scale Oven by Jeff Scott. These programs can also help to retune various MIDI synthesizers and samplers.

In this paper, however, we are interested in computational support for the composition process itself, a field commonly called computer-aided composition (or algorithmic composition). For example, consider a composer who wants to create a progression of microtonal chords that follows some rules on harmony. Some of her rules are inspired by conventional harmony. For example, a relatively simple rule states that consecutive chords should often share common tones. She conceives other rules for a specific piece or section she is working on (e.g., certain chords should contain specific microtonal intervals). The composer plans to use different textures in her piece. For instance, some sections consist of a melody with accompaniment; other sections are contrapuntal. These textures should always express an underlying microtonal harmony progression. The rules on harmony are complemented by rules on the individual parts. For example, non-harmonic tones may be allowed for smoother melodic lines, but these are restricted by specific rules in order to keep the harmony recognizable (e.g., passing tones may be allowed). Other rules restrict simultaneous notes (e.g., she may want to avoid unisons and octaves). The composer may also want each part to consist of certain motifs.

Many existing computer-aided composition systems support microtonal pitches, including often-used systems such as Max/MSP & PureData; OpenMusic & PWGL {Assayag, "Computer Assisted Composition at IRCAM: From PatchWork to OpenMusic", 1999; Laurson et al., 2009}; SuperCollider {McCartney, 2002}; JMSL {Polansky et al., 1990; Didkovsky, 2001}; Common Music {Taube, 1997}; and Fractal Tune Smithy by Robert Walker. However, complex music theories such as the microtonal theories of harmony or counterpoint sketched above are difficult to model with these systems.

Music theories such as harmony or counterpoint are traditionally stated in a modular way by a set of rules, where musical parameters (e.g., a single pitch) are often affected by multiple rules at the same time. This approach allows for a formal description of a complex network of interval relations in music, which is necessary for theories of harmony or counterpoint. Important interval examples are the sequence of intervals in a melody, the intervals between melodic peaks, the set of intervals between simultaneously sounding notes, the set of intervals that form an implied harmony (which may last longer than individual notes), the intervals between chord roots, the intervals that form an underlying scale (mode), how scales/modes are transposed in modulations and so forth.

The systems mentioned above only partly embrace this complexity. They make it very hard to describe a network of interval relations, because parameters (e.g., pitches) can hardly be affected by more than a single rule at a time. For example, typically either only the horizontal (melodic) or only the vertical (harmonic) dimension is controlled. This restriction is caused by the underlying programming model of these systems, which efficiently maps sets of known values to sets of values to compute (as in a function).

Complex music theories that describe a network of interval relations are far more easily formalized using a programming model based on bi-directional relations (as in first-order logic). We propose to use a computational model for microtonal computer-aided composition that stems from logic programming. More specifically, we use constraint programming {Apt, 2003}, which is more efficient than logic programming, in particular for numeric relations (it employs consistency checking or constraint propagation algorithms).

A musical constraint satisfaction problem (CSP) can be seen as a computer program implementing a mathematical model of a music theory. It defines a music representation (score) where some aspects are represented by variables (i.e. unknowns), and relations between these variables are restricted by a set of constraints (rules). For example, the pitches of notes in the score and the underlying harmonic structure may be unknown in a CSP definition. Each variable has a domain, that is, the set of values it may take in a solution. A constraint solver finds one or more solutions for the problem. In a solution, the domain of each variable is reduced to a single value that is consistent with all its constraints.
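The workings of a constraint solver can be illustrated with a small Python sketch (the code in this paper is otherwise written in Oz; all names here are illustrative). Variables are given finite domains, and only assignments consistent with every constraint count as solutions. A real solver prunes domains through constraint propagation; the naive generate-and-test search below merely demonstrates the idea.

```python
from itertools import product

def solve(domains, constraints):
    """Return all assignments that satisfy every constraint.
    Naive generate-and-test; real solvers prune domains via propagation."""
    names = list(domains)
    solutions = []
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions

# Two unknown note pitches; the constraints demand an ascending
# melodic step of at most 2 semitones.
domains = {"p1": range(60, 64), "p2": range(60, 64)}
constraints = [
    lambda a: a["p2"] > a["p1"],       # ascending
    lambda a: a["p2"] - a["p1"] <= 2,  # a step, not a leap
]
print(solve(domains, constraints))
```

Note that both pitches are restricted by both constraints at once; this is the multi-rule interaction that function-based systems struggle with.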

Constraint programming has been used before for modeling music theories. Anders and Miranda {"A Survey of Constraint Programming Systems for Modeling Music Theories and Composition", in print} provide an extensive survey, which introduces various musical CSPs and systems. However, we are not aware of any previous research that applies constraint programming to modeling microtonal music.

The proposed model and all case studies presented in this paper have been implemented in Strasheela. Strasheela is a constraint-based computer-aided composition system that allows users to model their own music theories as musical constraint satisfaction problems. Strasheela supports a wide range of music theories, and it provides a rich toolbox that simplifies such definitions. Strasheela is freely available at http://strasheela.sourceforge.net/.

Our model extends Strasheela's core functionality by a constrainable representation for musical concepts such as scales and harmony. These representations support both the standard Western tuning (12 tone equal temperament) and microtonal tuning systems. For simplicity, this paper refers to the proposed model as Strasheela (i.e. its implementation).

### Plan of Paper

The rest of this paper is organized as follows. The next four sections introduce a computational model for composing microtonal music. These sections explain how music is represented in this model, point out the variables in this representation that can be constrained, and discuss the definition and application of constraints.

The subsequent four sections present a number of concrete case studies. These sections demonstrate that the presented model is suitable for implementing microtonal music theories in the disciplines of harmony, melody and counterpoint. The paper ends with a summary.

## Pitch Representation

### Microtonal Pitches

Composers of microtonal music use various tuning systems, but two approaches to tuning are particularly popular. Keislar interviewed a number of important American composers of microtonal music {Keislar et al., "Six American Composers on Nonstandard Tunings", 1991}. The interviewed composers fall rather clearly into one of two camps.

Some composers use equal temperaments (ETs), which subdivide an interval – most commonly the octave – into a number of equal steps {Blackwood, 1991}. The ubiquitous example is twelve-tone equal temperament (12-TET), which consists of 12 equal steps per octave. Of the interviewed composers, John Eaton primarily uses 24-TET (his performers are nevertheless free to inflect these quartertones), Joel Mandelbaum uses 31-TET, and Easley Blackwood has explored all ETs from 12-TET to 24-TET.

Other composers prefer just intonation (in the interview, Lou Harrison and Ben Johnston). Just intonation (JI) uses intervals that can be represented by whole number frequency ratios. Small-integer ratios play an important role due to their perceptual quality as consonances {Doty, 2002; Johnston, 1964}. For example, the ratio 3:2 corresponds to the interval of a pure fifth, and 7:4 is a harmonic seventh. The harmonic complexity of JI intervals is often quantified by their odd limit, which – for intervals up to an octave – is the largest odd number in a ratio {Partch, 1974}. For example, the just minor sixth 8:5 is 5 odd limit, while the subminor third 7:6 is 7 odd limit. The set of all intervals denoted by some odd limit includes lower-limit intervals: the set of 7 odd limit intervals also includes 8:5. A related concept is the prime limit – the largest prime factor present in a JI interval's ratio – which is more commonly used for discussing scales than individual intervals. Note that in contrast to ETs, JI has an infinite number of pitches per octave: repeated transposition by any JI interval always brings up new pitches.
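The odd limit computation just described can be made concrete with a short Python sketch (a hypothetical helper, not part of Strasheela): reduce the ratio, strip all factors of two (octave transpositions), and take the larger remaining odd number.

```python
from math import gcd

def odd_part(n):
    """Remove all factors of two from n."""
    while n % 2 == 0:
        n //= 2
    return n

def odd_limit(p, q):
    """Odd limit of the JI interval p:q -- the largest odd number
    left in the reduced ratio after octave factors are removed."""
    g = gcd(p, q)
    return max(odd_part(p // g), odd_part(q // g))

print(odd_limit(3, 2))  # pure fifth -> 3
print(odd_limit(8, 5))  # just minor sixth -> 5
print(odd_limit(7, 6))  # subminor third -> 7
```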

Further temperaments reduce the total number of pitches – and thus the cognitive workload – by closely approximating JI intervals. Such temperaments level JI intervals that are very close to each other (i.e., temper out certain commas) and distribute the resulting pitch shift over the pitches of the temperament. A well-known example is meantone temperament {Barbour, 2004; Leedy, 1991}, where the syntonic comma is tempered out. For the interval C-E, this temperament does not distinguish whether these tones are one just major third or four octave-transposed fifths (a Pythagorean third) apart. Quarter-comma meantone tempers out the difference between these intervals by slightly reducing the size of all fifths while leaving major thirds in JI. Note that the difference between a just and a Pythagorean third is not expressed in common Western music notation either: meantone temperament was not only a compromise for reducing the number of keys on keyboard instruments, but deeply influenced Western musical thinking and compositional practice as well.

The three approaches to tuning systems outlined above are related. ETs can also approximate JI, like the third approach discussed above. For example, 31-TET {Fokker, 1955} approximates quarter-comma meantone very well, which in turn closely approximates the intervals of 7-limit JI (odd limit). A formal approach to tuning that unifies these three approaches is the regular temperament. A regular temperament {Milne et al., 2007} generates all pitches of a tuning with a finite number of intervals called generators. For example, the generators of meantone are its flat fifth and the octave: all meantone pitches can be generated by repeated transpositions of these intervals. JI requires a unique generator for each prime, while the generator for any ET is its smallest interval.
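The generator idea can be verified numerically with a few lines of Python (an illustrative sketch, not Strasheela code): in quarter-comma meantone the fifth is narrowed just enough that four fifths less two octaves yield a just major third (5:4).

```python
from math import log2

# Generators of quarter-comma meantone, in cents: a flattened fifth
# (narrowed by a quarter of the syntonic comma) and the octave.
FIFTH = 1200 * log2(5) / 4   # ~696.58 cents
OCTAVE = 1200.0

def meantone_pitch(num_fifths, num_octaves):
    """Every meantone pitch is a combination of the two generators."""
    return num_fifths * FIFTH + num_octaves * OCTAVE

# Four fifths up, two octaves down: exactly a just major third.
print(round(meantone_pitch(4, -2), 2))   # 386.31 cents = 1200 * log2(5/4)
```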

Finally, well temperaments also approximate JI intervals, but certain intervals – and in particular interval combinations in certain keys – are tuned more closely to their JI counterparts than others {Barbour, 2004; Polansky et al., 2009}. As a result, these tuning systems are irregular temperaments where different keys sound different.

### Pitches and Pitch Classes

Strasheela allows for highly complex music theory definitions, and therefore efficiency is crucial for practical use. Constraint programming provides highly optimized algorithms for solving problems involving integer domains and sets of integers. Our model therefore uses only variables with such domains.

The pitches of equal temperaments are naturally expressed by integers. The proposed model and its implementation in Strasheela support arbitrary equal temperaments. However, which pitch a pitch-integer actually denotes depends on the chosen number of pitches per octave. By default, pitch-integers are interpreted as MIDI key numbers (i.e. 12-TET, where 60 is middle C) when outputting resulting music to sound synthesis formats (e.g., MIDI, Csound) or music notation. Arbitrary other equal temperaments can be chosen. Popular examples are 19-TET, 22-TET, 31-TET, 41-TET, 53-TET and 72-TET, because these approximate certain JI intervals very well. In addition, high pitch resolutions such as cent (1200-TET) or even millicent (120000-TET) are available.1 In the rest of this paper, whenever we use the term pitch in the context of music modeling we refer to pitch-integers.
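For playback, a pitch-integer in an arbitrary ET must eventually be mapped to a frequency. The Python sketch below shows the standard arithmetic; anchoring pitch 69 to 440 Hz follows the MIDI convention for 12-TET, and the analogous anchors for other temperaments are assumptions of this sketch, not Strasheela's documented behavior.

```python
def pitch_to_hz(pitch, ps_per_octave=12, ref=(69, 440.0)):
    """Frequency of a pitch-integer in an equal temperament with
    ps_per_octave steps per octave. ref anchors one pitch-integer
    to a frequency; (69, 440.0) is the 12-TET MIDI convention."""
    ref_pitch, ref_hz = ref
    return ref_hz * 2 ** ((pitch - ref_pitch) / ps_per_octave)

print(round(pitch_to_hz(60), 2))                         # middle C: 261.63 Hz
# The same pitch expressed in cent resolution (1200-TET):
print(round(pitch_to_hz(6000, 1200, (6900, 440.0)), 2))  # 261.63 Hz
```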

Pitches are variables in our model, and so are higher-level pitch-related concepts such as pitch classes or scale degrees. Remember that the value of variables can be unknown, and that variables can be constrained. For example, compositional rules applied by users are constraints; these will be discussed later. Some constraints are implicitly applied and are part of the music representation. In particular, the interrelation between the different pitch-related concepts (e.g., pitches and pitch classes) are defined as constraints. Formally, there is no difference between user constraints and these implicit constraints.

Figure 1 shows an example of an implicit constraint: it defines the well-known relation between a pitch on the one hand, and the corresponding pitch class and octave on the other, for arbitrary equal temperaments.2 As customary, pitch classes are represented by integers starting from the note C (pitch class 0) and numbering all tones within a single octave. Following a convention from logic programming, all variables are notated starting with a capital letter (psPerOctave, the number of pitches per octave, is not a variable but fixed per CSP, hence its lowercase spelling).

Pitch operations such as transposition are also formulated as constraints on variables. Pitches of arbitrary ETs are transposed simply by adding a transposition interval. Pitch class transposition "wraps around" at the number of pitches per octave, which is implemented with a modulus constraint. More generally, intervals between variables in the music representation can be seen as intervals in the sense of Lewin, and constraints can express Lewin's transformations {Lewin, 1987}.
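In plain Python, the two implicit constraints just described look as follows. Note an important difference: in the constraint model these relations are bidirectional (any of the variables may be the unknown), whereas the illustrative functions below compute in one direction only.

```python
def pitch_parts(pitch, ps_per_octave=12):
    """The relation of Figure 1: pitch = octave * psPerOctave + pitchClass."""
    octave, pitch_class = divmod(pitch, ps_per_octave)
    return octave, pitch_class

def transpose_pc(pitch_class, interval, ps_per_octave=12):
    """Pitch class transposition wraps around at the octave
    (a modulus constraint in the actual model)."""
    return (pitch_class + interval) % ps_per_octave

print(pitch_parts(60))      # (5, 0): middle C is pitch class 0
print(transpose_pc(11, 2))  # 1: B up a whole tone is C sharp
```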

### Scale Degrees and Chord Degrees

A degree is an integer variable that serves, in effect, as an index into a pitch class sequence. The combination of a degree and its associated accidental (also an integer variable) corresponds to a pitch class, relative to a given pitch class sequence.

The notion of a scale degree is well-known (typically notated by a Roman numeral). The C-major scale is represented in 12-TET by the following sequence of pitch classes [0, 2, 4, 5, 7, 9, 11]. The degree 3 (Roman numeral III) of C-major is the pitch class 4 – if the degree accidental is 0 (natural). The accidental is a pitch class transposition interval that serves as an offset from the actual pitch class at the degree position. For example, degree 6 with accidental -1 (flat) of C-major corresponds to the pitch class 8 (VIb is Ab).

Strasheela's degree constraint is applicable for chords as well. The notion of a chord degree makes it possible to refer to specific chord pitches. For example, the third degree of a major triad is its fifth, with an accidental -1 it is a diminished fifth.

Figure 3 formally defines the degree concept for arbitrary ETs. The pitch class at the position Degree in the pitch class sequence PCs is accessed with the select-constraint and bound to the auxiliary variable NthPC. The corresponding pitch class PC is NthPC pitch-class-transposed by the Accidental. Again, the degree, its accidental and the pitch class are variables and so are the pitch classes in PCs.

The formal definition of degree transposition is left out for brevity. Again, this transposition "wraps around" at the boundaries of the degree sequence, but the presence of accidentals makes it more complex.
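A one-directional Python sketch of the degree concept may clarify the definition (the actual model states it as a bidirectional constraint, as in Figure 3; the helper name is illustrative):

```python
def degree_to_pc(pcs, degree, accidental=0, ps_per_octave=12):
    """Pitch class at a 1-based degree of the pitch class sequence pcs,
    offset by an accidental (a pitch class transposition interval)."""
    nth_pc = pcs[degree - 1]                     # the select constraint
    return (nth_pc + accidental) % ps_per_octave # pitch class transposition

c_major = [0, 2, 4, 5, 7, 9, 11]
print(degree_to_pc(c_major, 3))      # III of C-major -> 4 (E)
print(degree_to_pc(c_major, 6, -1))  # flat VI -> 8 (A flat)
```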

## Hierarchic Music Representation

So far we only discussed pitch representations. Strasheela's music representation in fact supports arbitrary symbolic score information, and it organizes this information in a hierarchic fashion.

The Strasheela music representation is designed in such a way that ultimately users control what information is contained in the score. For example, variables for chord or scale degrees introduced above are only contained in the score if required. The music representation predefines models for a range of music theory concepts such as notes, intervals, scales, motifs and so forth. Users construct a score by assembling these score objects as required.

Score objects encapsulate a number of attributes. For example, the attributes of a note include its start time, duration, end time, amplitude, its pitch and so forth.3 The values of these attributes are variables that can be constrained.

Score objects are hierarchically nested: container objects can hold other objects, including other containers. Two kinds of containers are particularly important. A sequential container imposes an implicit constraint that its contained objects follow each other in time, while a simultaneous container constrains its contents to run in parallel in time.4

Scores can be output into various formats for music notation (e.g., Lilypond, and MusicXML via Fomus) and sound synthesis (e.g., MIDI and Csound). These export facilities can be flexibly customized by programming. For example, users can define how specific score objects are output. Using this approach, the microtonal notation examples in this paper have been created by mapping pitch classes to certain pitch notations with Lilypond.

Although Strasheela's music representation building blocks model their music theory concepts in a highly generic way, users may require additional building blocks that model their own theory. Because Strasheela is a programming system, its music representation is highly extendable by programming means. For example, building blocks are implemented as classes in the object-oriented programming sense, and users can extend them by inheritance. Nevertheless, this paper describes Strasheela's capabilities for musicians and not programmers, and therefore leaves out implementational details. For an extensive discussion of technical details, the interested reader is referred to {Anders, 2007}.

## Representing Intervals, Chords and Scales

The Strasheela music representation provides extensive support for analytical information. Score objects such as chords and scales do not sound when a score is played back, but explicit representations of this information greatly simplify the definition of music theories such as harmony or counterpoint. The following paragraphs describe the representation of these score objects. The description focuses on chords and scales, because these objects play a particularly important role in the case studies presented later. The representation of intervals uses a similar overall programming approach.

Chord and scale objects contain a number of attributes. For example, important attributes are their Root (a pitch class) and PitchClasses (a set of pitch classes, which includes the root). In the following we only describe the chord definition; the scale definition is exactly the same (both classes inherit from the same superclass).

Chord objects should be able to distinguish between different chord types (e.g., major vs. minor chord) and these types should be user-definable. This requirement is addressed by a "database" of possible chords. Each database entry contains a number of fields that describe a chord. For example, the following database entry defines the harmonic seventh chord (also called 7-limit dominant seventh) in pseudo code syntax that resembles Strasheela's Oz syntax.

```oz
chord(pitchClasses: [4:4 5:4 6:4 7:4]
      root: 4:4
      comment: 'harmonic seventh')
```

Pitch classes in the database can be notated in different formats including pitch class integers, ratios (as in the example above), or symbolic note names. Internally, any format is transformed to pitch class integers for the current number of pitches per octave in order to allow for integer constraint propagation. Nevertheless, the original format is preserved as well and can be used for interpreting solutions (e.g., the original ratios can be used for adaptive just intonation, see below).
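The translation from ratios to pitch class integers can be sketched in Python as rounding to the nearest step of the current temperament (an illustrative helper, not Strasheela's actual conversion code):

```python
from math import log2

def ratio_to_pc(num, den, ps_per_octave):
    """Nearest pitch class integer for a JI ratio, octave-reduced."""
    return round(ps_per_octave * log2(num / den)) % ps_per_octave

# The harmonic seventh chord 4:5:6:7 from the database entry above,
# converted for 31-TET:
print([ratio_to_pc(n, 4, 31) for n in (4, 5, 6, 7)])  # [0, 10, 18, 25]
```

The original ratios are kept alongside these integers, so no information is lost for later interpretation of solutions.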

Users can define their own databases, or use (and extend) existing databases. Strasheela predefines a large set of database entries. For example, it provides over 50 different chord types and 100 scale types for 31-TET (many entries stem from the Scala software).

Entries in the chord database on the one hand and chord objects on the other hand are linked by the object's attribute Index. The meaning of this attribute depends on the current database. If Index=1, then the chord object's type is set to the type of the first chord in the database and so forth. The index is a variable that can be constrained and which affects other chord attributes (e.g., its pitch class set).

While a chord database always contains only a single transposition of a chord, a chord object instance can transpose entries in the database. The attribute Transposition measures the transposition amount as a pitch class interval.

As explained above, a Strasheela score is typically nested in a temporal hierarchy of sequential and simultaneous containers. An underlying harmonic analysis – a sequence of chord and scale objects – can be stored in sequential containers that run parallel to the rest of a score (or a score segment). By default, notes are implicitly related to their simultaneous chords and scales.

It should be mentioned again that Strasheela is highly extendable. The representation scheme of intervals, chords and scales has been extended to represent additional information. For example, an extension (subclass) of the chord representation stores the chord inversion. Other chord extensions control the relation between chords and an underlying scale, and store the scale degree of a chord root.

Additional information can also be added to the databases for intervals, chords and scales, and these score objects can be constrained accordingly. For example, the dissonance degree of an interval or chord can be added (e.g., Euler's gradus suavitatis), or essential pitch classes of a chord can be marked (e.g., a dominant seventh chord is recognized unmistakably by its root, third and seventh, while the fifth is not essential for this chord). Most importantly, users can extend the representation according to their own needs.

## User Constraints

The previous sections presented a music representation for microtonal music: users model microtonal music theories by assembling a music representation instance with the building blocks provided, and by defining and applying constraints to the variables in this music representation instance. For example, a homophonic chord progression can be modeled with a sequence of chord objects running in parallel to multiple note sequences representing the parts. While some constraints are implicitly applied by the system (e.g., the relation between the pitch and pitch class of a note, as presented above), most constraints are explicitly applied by the user. For example, the user may constrain that the roots of all chord objects are pairwise distinct. A constraint restricts the relation between a set of variables, as has been shown in Figure 1 and Figure 3 above.

Constraints are freely applied to arbitrary sets of variables. However, a single constraint is often applied multiple times to similar variable sets, and such variable sets can be rather complex. For example, in conventional counterpoint a passing tone that is relatively short and on an easy beat can be dissonant. When this rule is modeled as a constraint, the constraint is applied to every potential passing tone. Also, this constraint must have access to a complex set of variables in order to decide whether a certain note can be dissonant or not. Strasheela supports a convenient and fully generic mechanism for constraint application {Anders et al., "Constraint Application with Higher-Order Programming for Modeling Music Theories", 2010}.

While Strasheela's true power lies in the fact that users can define their own constraints from scratch, for convenience the system pre-defines a wide range of constraints. For example, Strasheela provides many pattern constraints that restrict a sequence of integers in various ways, and it makes generalized versions of many harmonic or counterpoint rules available.
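The higher-order application mechanism can be illustrated in Python: a rule is a function, and an applicator maps it over the relevant variable sets – here over all pairs of consecutive elements, analogous to a pattern constraint on neighbours. All names are illustrative, and the sketch merely tests a finished sequence instead of constraining unknowns.

```python
def for_neighbours(xs, rule):
    """Apply a binary rule to every pair of consecutive elements."""
    for a, b in zip(xs, xs[1:]):
        rule(a, b)

violations = []
def max_step(p1, p2, limit=2):
    """Melodic rule: consecutive pitches move by at most `limit` steps."""
    if abs(p2 - p1) > limit:
        violations.append((p1, p2))

for_neighbours([60, 62, 63, 67, 66], max_step)
print(violations)   # [(63, 67)]: the only leap larger than 2
```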

The paper so far proposed a computational model for microtonal music, which consists of a music representation and support for constraining variables in this representation. We will now present a number of concrete case studies of microtonal music theories implemented with this model. These case studies are situated in different music theory sub-disciplines such as harmony, melody and counterpoint. The case studies also demonstrate different equal temperaments. All music theory models have been implemented with Strasheela.5

## Harmony

The case studies presented in this section model harmony. Above we introduced the notion of analytical chord objects. This representation is an essential building block for the harmony models below.

### Diatonic Cadence in 12-Tone Equal Temperament

The first case study models a harmony task from common practice music: it creates diatonic cadences. Its music representation consists of a sequence of analytical chord objects and a scale object. The pitch classes of these chords and scales are represented in 12-TET, to start with a well-known tuning system.

The model applies the following constraints to its music representation.6 The scale object is set to a C-major scale (for simplicity, the root 1/1 has been set to C for all examples in this paper). The chord database specifies only triads (major, minor, diminished, augmented). Only diatonic chords are permitted: the pitch class set of each chord must be a subset of the scale's pitch class set. Consecutive chords in the sequence share common pitch classes (harmonic band {Schoenberg, 1986}), but consecutive chords must be distinct (i.e. either the chord types, transpositions or both differ). The first and last chords must be equal. Finally, the chord sequence ends in a cadence: the union of the pitch classes of the last three chords constitutes the full pitch class set of the scale.7 Also, the root of the last chord is constrained to the root of the scale.
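The cadence condition lends itself to a compact check. The Python sketch below tests the condition on a finished progression (in the model it is stated as a constraint on unknowns; all names are illustrative).

```python
def is_cadence(chord_pcs, scale_pcs):
    """True if the pitch classes of the last three chords together
    constitute the full pitch class set of the scale."""
    return set().union(*chord_pcs[-3:]) == set(scale_pcs)

c_major = {0, 2, 4, 5, 7, 9, 11}
i = {0, 4, 7}; ii = {2, 5, 9}; v = {7, 11, 2}
print(is_cadence([i, ii, v, i], c_major))  # True: II, V and I cover the scale
print(is_cadence([ii, i, v, i], c_major))  # False: F and A never appear
```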

```oz
%%
%% Note: only explorer out -- translation to chord notation manually
%%

%% ?? TODO: def some simple Lily output

declare
MyScale = {Score.makeScore scale(index:{HS.db.getScaleIndex 'major'}
                                 transposition:{HS.pc 'C'})
           unit(scale:HS.score.scale)}

/** %% CSP with chord sequence solution. Only diatonic chords, follow
%% Schoenberg's recommendations on good root progressions, end in a cadence.
%% */
proc {MyScript ChordSeq}
   %% settings
   N = 5                        % number of chords
   Dur = 2                      % dur of each chord
   %% only specified chord types are used
   ChordIndices = {Map ['major'
                        'minor'
                        'diminished'
                        'augmented']
                   HS.db.getChordIndex}
   %% create chord objects
   Chords = {LUtils.collectN N
             fun {$}
                {Score.makeScore2 chord(index:{FD.int ChordIndices}
                                        duration:Dur
                                        timeUnit:beats)
                 %% label can be either chord or inversionChord
                 unit(chord:HS.score.chord)}
             end}
in
   %% create music representation for solution
   ChordSeq = {Score.makeScore seq(items:Chords startTime:0) unit}
   {HS.rules.neighboursWithCommonPCs Chords}
   {HS.rules.distinctNeighbours Chords}
   %% first and last chords are equal (neither index nor transposition are distinct)
   {HS.rules.distinctR Chords.1 {List.last Chords} 0}
   %% only diatonic chords
   {ForAll Chords proc {$ C} {HS.rules.diatonicChord C MyScale} end}
   %% last three chords form cadence
end
MyScores = {SDistro.searchAll MyScript unit(order:startTime
                                            value:random
                                            % value:mid
                                           )}
%% TODO: output to music notation
{Browse {Map MyScores fun {$ MyScore} {MyScore toInitRecord($)} end}}
```

For 5 chords, there exist only 3 solutions for this music theory model. Strasheela supports finding all solutions of a CSP. One solution is shown in Figure simple-cadence, where the chord objects are notated using common chord symbols.

### 7-Limit Harmony

The next case study introduces 7-limit intervals, and thus goes beyond the scope of common practice harmony. Also, this case study defines a clearly more complex theory of harmony.

We will use 31-TET, because this temperament provides close approximations of 7-limit intervals, as has been mentioned above. 31-TET can be notated with the common sharps and flats plus quartertone accidentals. A quartertone sharp raises by one step (38.71 cent), a chromatic semitone (e.g., C-C#) is two steps, and a diatonic semitone (e.g., C-Db) is three steps. The interval 7:4 is represented by the augmented sixth (e.g., C-A#, 25 steps): this interval is only 1.1 cent smaller than the just 7:4.
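These step sizes and the 1.1 cent deviation are easily verified with simple cent arithmetic (Python, illustrative):

```python
from math import log2

step = 1200 / 31                       # one 31-TET step: ~38.71 cents
aug_sixth = 25 * step                  # C-A#, the 31-TET stand-in for 7:4
just_7_4 = 1200 * log2(7 / 4)          # ~968.83 cents

print(round(step, 2))                  # 38.71
print(round(just_7_4 - aug_sixth, 2))  # 1.08: the augmented sixth is
                                       # about 1.1 cent smaller than just 7:4
```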

31-TET allows for a great many colorful chord types outside conventional harmony. This case study uses two tetrads that are special in that they consist of only consonant intervals: the harmonic seventh chord (ratios 4:5:6:7) and the subharmonic seventh chord (1/4:1/5:1/6:1/7).

In this case study we do not want to impose any key (unlike the cadence example above): the two chord types can be transposed to any of the 31 tones of the temperament in principle. Nevertheless, various constraints are applied in order to obtain a smoothly connected chord progression.

The most important harmonic constraints are inspired by {Schoenberg, 1986}. Schoenberg distinguishes between so-called ascending progressions (e.g., V-I or III-I; using common Roman numerals for notating root progressions), descending progressions (e.g., I-V or III-V), and super-strong progressions (e.g., I-II).

We generalized Schoenberg's root progression guidelines for microtonal music by formalizing his explanation instead of his actual rules {Anders et al., "A Computational Model that Generalises Schoenberg's Guidelines for Favourable Chord Progressions", 2009}. Briefly summarized, two consecutive chords form an ascending progression if both chords share common pitch classes, but the root of the second chord does not occur in the first chord. In a descending progression, a non-root pitch class of the first chord becomes the root of the second chord. A super-strong progression consists of two chords that do not share any pitch classes.
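This generalized classification can be stated directly in Python (an illustrative sketch over pitch class sets, not the Strasheela rule itself):

```python
def classify_progression(pcs1, pcs2, root2):
    """Classify the progression from chord pcs1 to chord pcs2
    (pitch class sets), where root2 is the second chord's root."""
    if not (pcs1 & pcs2):
        return "super-strong"   # no common pitch classes
    if root2 not in pcs1:
        return "ascending"      # shared tones, but a fresh root
    return "descending"         # a tone of pcs1 becomes the new root

c_major = {0, 4, 7}
g_major = {7, 11, 2}
d_minor = {2, 5, 9}
print(classify_progression(g_major, c_major, root2=0))  # ascending (V-I)
print(classify_progression(c_major, g_major, root2=7))  # descending (I-V)
print(classify_progression(c_major, d_minor, root2=2))  # super-strong (I-II)
```

Note that the classification uses only pitch class sets and roots, so it applies unchanged to the 7-limit tetrads of this case study.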

This case study follows Schoenberg's recommendation for the treatment of descending progressions. Ascending progressions are used freely, but a descending progression must be 'resolved', quasi as a 'passing chord': in a sequence of three chords C1,C2,C3 the sequence C1,C2 can only be descending if C1,C3 is ascending. This case study completely disallows super-strong progressions to obtain a smoother progression.

Figure 7limit-progression shows a solution of this case study. The music representation of this case study consists of 4 sequences of notes and a sequence of chord objects that run in parallel. The notation of the chord objects extends common chord symbol notation by symbols for the 7-limit chords. The annotation harm7 indicates a harmonic seventh chord and subharm7 a subharmonic seventh chord.

declare
{HS.db.setDB ET31.db.fullDB}

/** %% CSP with chord sequence solution.
%% */
proc {MakeChords Chords}
%% settings
N = 9                        % number of chords
Dur = 2                      % dur of each chord
%% only specified chord types are used
ChordIndices = {Map ['harmonic 7th'
'subharmonic 6th']
HS.db.getChordIndex}
in
%% create chord objects
Chords = {LUtils.collectN N
   fun {$} {Score.makeScore2 chord(index:{FD.int ChordIndices}
                                   duration:Dur
                                   %% just to remove symmetries
                                   % sopranoChordDegree:1
                                   timeUnit:beats)
            %% label can be either chord or inversionChord
            unit(chord:HS.score.inversionChord)} end}
%% Good progression: ascending or descending progression only as 'passing chords'
{HS.rules.schoenberg.resolveDescendingProgressions Chords unit}
%% no super strong progression in such a simple progression
{Pattern.for2Neighbours Chords
 proc {$ C1 C2} {HS.rules.schoenberg.superstrongProgressionR C1 C2 0} end}
%% First and last chords are equal (neither index nor transposition are distinct)
{HS.rules.distinctR Chords.1 {List.last Chords} 0}
%% roots of all other chords are distinct
{FD.distinct {Map Chords.2 fun {$X} {X getRoot($)} end}}
%% first chord is harmonic dominant seventh in C
{Chords.1 getIndex($)} = {HS.db.getChordIndex 'harmonic 7th'}
{Chords.1 getRoot($)} = {ET31.pc 'C'}
%% 30-70% are 'subharmonic 6th' chords
{Pattern.percentTrue_Range
{Map Chords proc {$C B} B = ({C getIndex($)} =: {HS.db.getChordIndex
'subharmonic 6th'})
end}
30 70}
%% chord indices form cycle pattern
{Pattern.cycle {Map Chords fun {$C} {C getIndex($)} end} 3}
%% All chords are in root position.
{ForAll Chords proc {$C} {C getBassChordDegree($)} = 1 end}
end
{GUtils.setRandomGeneratorSeed 0}
[MyScore] =
{SDistro.searchOne
proc {$ MyScore}
   MyScore
   = {Segs.homophonicChordProgression
      unit(voiceNo: 4
           iargs: unit(inChordB: 1
                       % inScaleB: 1
                      )
           %% one pitch dom spec for each voice
           rargs: each # [unit(minPitch: 'C'#4 maxPitch: 'A'#5)
                          unit(minPitch: 'G'#3 maxPitch: 'E'#5)
                          unit(minPitch: 'C'#3 maxPitch: 'A'#4)
                          unit(minPitch: 'E'#2 maxPitch: 'D'#4)]
           chords: {MakeChords}
           startTime: 0
           timeUnit: beats
           %% customise notation: distribute 4 voices over 2 staffs
           makeTopLevel: fun {$ Voices End Args}
UpperStaffVoices LowerStaffVoices
in
{List.takeDrop Voices 2 UpperStaffVoices LowerStaffVoices}
%%
{Score.make
sim([seq([sim(UpperStaffVoices)])
seq(%% invisible grace note necessary to put clef at the very beginning
info:lily("\\clef bass \\grace s")
[sim(LowerStaffVoices)])
seq(info:lily("\\set Staff.instrumentName = \"Anal.\"")
Args.chords
endTime: End)
%                                seq(Args.scales
%                                    endTime: End)
]
startTime: 0)
unit}
end)}
end
%% left-to-right strategy with breaking ties by type
HS.distro.leftToRight_TypewiseTieBreaking
}
{MyScore wait}
{RenderLilypondAndCsound_ET31 MyScore
unit(file:"7-limit-progression-tmp")}

This case study shapes its result with several further harmonic constraints. The first and last chords are set to the harmonic seventh chord on C. All chords are in root position, and the roots of all chords (but the first) are pairwise distinct. 30-70 percent of the chords must be of the subharmonic seventh chord type. Also, the chord types form a cycle pattern that repeats after every 3 chords (in Figure 7limit-progression, the pattern is harm7, subharm7, harm7, …).

The case study also applies various voice-leading rules. The pitch range of the 4 parts is restricted to the tessituras of vocal music. Melodic intervals are restricted to a fifth at most (larger intervals are allowed in principle for the bass, though they do not occur in the presented solution). The harmonic intervals between upper voices are restricted not to exceed an octave (larger intervals are allowed between bass and tenor). Open and hidden parallels of perfect consonances are prohibited. Voice crossing is not allowed. Finally, if consecutive chords share pitch classes, then these are repeated in the same part and octave (a simplified version of Bruckner's 'law of the shortest way' {Schoenberg, 1969}).

Remember that the theory of harmony implemented in this section is only an example; the strength of the proposed model is in fact that users can implement their own theories. For example, instead of applying the Schoenberg-inspired constraints above, consecutive chords could be connected by a smooth voice leading {Straus, 2003}. In this alternative approach, the voice leading distance is the minimal sum of pitch class intervals between two chords (not pitch intervals!). For example, the voice leading distance between a four-voice C major chord (with doubled fifth) and Ab-maj7 in 12-TET is 2 (C->C=0 + E->Eb=1 + G->Ab=1 + G->G=0).8 Constraining the voice leading distance to a small value results in a smooth chord progression, and vice versa. Such a constraint could also be combined with other constraints on harmony; for example, chord pitch classes can be restricted to some underlying scale, or consecutive chords could be constrained to share common pitch classes (a harmonic band).
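The voice leading distance just described can be computed by minimizing over all pairings of chord tones. The following Python fragment illustrates the idea for the 12-TET example in the text (a brute-force sketch with our own helper names, not Strasheela code):

```python
from itertools import permutations

def pc_interval(a, b):
    """Undirected pitch-class interval (0..6) in 12-TET."""
    d = abs(a - b) % 12
    return min(d, 12 - d)

def voice_leading_distance(pcs1, pcs2):
    """Minimal sum of pitch-class intervals over all voice pairings."""
    return min(sum(pc_interval(a, b) for a, b in zip(pcs1, p))
               for p in permutations(pcs2))

# Four-voice C major (fifth doubled) to Ab-maj7:
# C->C = 0, E->Eb = 1, G->Ab = 1, G->G = 0
print(voice_leading_distance([0, 4, 7, 7], [8, 0, 3, 7]))  # 2
```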

The search process has been randomized in the above and the following case studies. In other words, different solutions are typically found if a CSP is solved multiple times.

Most musical instruments allow performers to inflect pitches considerably. Such instruments do not have the limitations of static scales, because the musicians can adapt their intonation depending on the context. When computationally modeling microtonal music theories, it is desirable to also support such an adaptive tuning behavior. Sethares {2005} surveys several technical approaches to adaptive JI.

A classical approach to adaptive JI has been proposed by Nicola Vicentino in the 16th century {Vicentino et al., 1996}. In Vicentino's approach, two manuals of the harpsichord are tuned to 1/4-comma meantone (ideally each with 19 notes per octave), but one manual is tuned a 1/4-comma higher than the other. This arrangement allows a musician to justly intonate all triads that are available in meantone: the just major thirds over any root are provided by meantone itself, and the narrow meantone fifths and minor thirds can be adjusted by using the corresponding tones of the other manual. Note that this approach tunes chords in JI, but the intervals between chord roots are still meantone-tempered, which avoids the problem of pitch drift that is common in JI.

Strasheela supports an approach to adaptive JI that generalizes Vicentino's idea. As has been explained above, chord database entries can be defined with JI ratios in Strasheela. These ratios can be used for tuning. In Strasheela's adaptive JI, chord roots are tuned according to the current ET (or alternatively a given static tuning table). The tuning of other chord tones, however, is adapted according to the JI ratios of the underlying harmony.

Figure 7limit-progression-adaptiveJI retunes the example presented previously in Figure 7limit-progression. The figures below the staffs report how the notes are retuned with respect to 31-TET (measured in cents). All chords are in root position in this example, and so the offset values for the bass notes are all 0 cent. The first chord is a harmonic seventh chord (4:5:6:7). In 31-TET, a fifth is flat by 5.2 cent, a major third is sharp by 0.8 cent, and a harmonic seventh is flat by 1.1 cent. The adaptive JI algorithm corrects all these intervals. If a strictly just result is not intended, for example because some slow beating is preferred, then it is also possible to specify how far the static temperament should be adapted towards JI (e.g., only half-way).
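The retuning offsets reported in Figure 7limit-progression-adaptiveJI can be recomputed directly: each offset is the difference between the just interval above the chord root and its 31-TET approximation. A small Python check (the helper names are ours, not Strasheela's):

```python
import math

def cents(ratio):
    """Size of a just frequency ratio in cents."""
    return 1200 * math.log2(ratio)

def et31(steps):
    """Size of a 31-TET interval of the given number of steps, in cents."""
    return steps * 1200 / 31

# 31-TET approximations of the harmonic seventh chord 4:5:6:7 above the root:
# major third = 10 steps, fifth = 18 steps, harmonic seventh = 25 steps.
for name, ratio, steps in [('third', 5/4, 10),
                           ('fifth', 3/2, 18),
                           ('seventh', 7/4, 25)]:
    offset = cents(ratio) - et31(steps)  # correction applied by adaptive JI
    print(name, round(offset, 1))  # third -0.8 / fifth 5.2 / seventh 1.1
```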

## Chord Figuration

The proposed model and its Strasheela implementation support arbitrary musical textures. While the harmony case studies of the previous section consisted of plain homophonic chord progressions, this section demonstrates how a single chord can be elaborated with chord figurations.

The present case study uses tones of a 7-limit JI chord from La Monte Young's The Well-Tuned Piano, called the Lost Ancestral Lake Region chord {Gann, 1993}. It is a subminor seventh chord with an added major second (ratios 12:14:18:21:27).9 The intervals of The Well-Tuned Piano are closely approximated by 41-TET: for example, the 41-TET error is very small for 7:4 (-2.97 cent), and extremely small for 3:2 (+0.48 cent). This case study therefore uses 41-TET pitch classes.
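These error figures follow from rounding each just interval to the nearest 41-TET step. The following Python sketch (the helper name is ours) reproduces them:

```python
import math

def et_error(ratio, division):
    """Error in cents of the closest division-TET approximation of a just ratio."""
    just = 1200 * math.log2(ratio)
    steps = round(just * division / 1200)  # nearest scale step
    return steps * 1200 / division - just

print(round(et_error(7/4, 41), 2))  # -2.97
print(round(et_error(3/2, 41), 2))  # 0.48
```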

Figure LostAncestralLakeRegionChord shows the Lost Ancestral Lake Region chord transposed to C. The chord is written in the Extended Helmholtz-Ellis JI Pitch notation (EHE notation), proposed by Marc Sabat and Wolfgang von Schweinitz {Sabat et al., "Three Crystal Growth Algorithms in 23-limit Constrained Harmonic Space", 2008}. This notation indicates Pythagorean tuning (3-limit) with the conventional intervals, and introduces a new accidental for a prominent comma of every new prime limit. The accidental for the septimal comma 64:63 resembles the digit seven (this comma indicates the difference between a Pythagorean seventh – two stacked fourths – and a harmonic seventh). Although EHE notation was originally proposed for JI, it can be used for various temperaments as well, much like Sagittal notation {Secor et al., 2004a}, which has been explicitly designed for this purpose.

%% is there actually any dependency on HarmonicProgressions-41ET.oz?
%  \insert  '~/oz/music/Strasheela/strasheela/trunk/strasheela/examples/Harmony-Examples/HarmonicProgressions-41ET.oz'

declare

{HS.db.setDB ET41.db.fullDB}

SopranoFenv = {Fenv.linearFenv [[0.0 0.0]
[0.1 1.0]
[0.2 0.0]
[0.35 1.0]
[0.5 0.0]
[0.75 1.0]
[1.0 0.0]]}

{GUtils.setRandomGeneratorSeed 0}

[MyScore]
= {SDistro.searchOne
proc {$ MyScore}
   End
   AkkNo = 20 % TMP. See Limit7Bs constraint below -- depends on this number... Also number etc of LowerLayer notes
   AkkDur = 4 % TMP %% TMP constant note dur of akkords
   UpperLayer
   = {Segs.makeAkkords
      unit(akkN: AkkNo
           iargs: unit(n: 2 % chord tones
                       duration: AkkDur
                       inChordB:1)
           rargs: unit(maxPitch: 'G'#5
                       minPitch: 'G'#3
                       maxRange: 7#4 % max interval between tones
                       minPcCard: 2 % always two different PCs
                       sopranoPattern: proc {$ Ps}
%% constrain pitch sequence of all upper tones of akkords sequence
{Pattern.fenvContour Ps SopranoFenv}
{Pattern.restrictMaxInterval Ps
{HS.pc 'G'}}
%                                                         {Pattern.undulating Ps unit}
%% too complex together with Pattern.fenvContour
%                                                           {HS.rules.ballistic Ps unit(oppositeIsStep: true)}
{Pattern.noRepetition Ps}
end
                       bassPattern: proc {$Ps}
                                       %% ?? restrict how often it changes direction
                                       {Pattern.fenvContour2 Ps SopranoFenv}
                                    end
                       rule: proc {$ Akks}
%% At least 50 percent of dyads are 7-limit (i.e. no 3-limit dyads)
Limit7Bs = {Map Akks
fun {$Akk} [N1 N2] = {Akk getItems($)}
in
{HS.rules.isLimit7ConsonanceR {HS.rules.getInterval N1 N2}}
end}
in
%% Ensure that the 7-limit dyads are somewhat evenly distributed (at least 3 in every 5 dyads)
%%
%% TMP: hardcoded total number of Limit7Bs
{ForAll {LUtils.sublists Limit7Bs
[1#5 6#10 11#15 16#20]}
              proc {$Bs} {Pattern.percentTrue_Range Bs 50 100} end}
             end))}
   LowerLayer
   = {Segs.makeCounterpoint
      unit(iargs: unit(n: AkkNo div 5
                       inChordB:1
                       duration: AkkDur * 5
                       rule: proc {$ Ns}
LastN = {List.last Ns}
%% currently there is only a single chord anyway...
                                LastC = {LastN findSimultaneousItem($test:HS.score.isChord)}
                             in
                                % {Pattern.noRepetition {Pattern.mapItems Ns getPitch}}
                                {Pattern.decreasing {Pattern.mapItems Ns getPitch}}
                                %% end in chord root
                                {LastN getPitchClass($)} = {LastC getRoot($)}
                             end)
           rargs: unit(maxPitch: 'B'#3
                       minPitch: 'C'#3))}
   AllNotes
in
   MyScore
   = {Score.make
      sim([seq(UpperLayer endTime: End)
           seq(LowerLayer endTime: End)
           %% use chord with at least 5 PCs, so there are different options if there should be always 3 sim PCs
           seq([chord(index:{HS.db.getChordIndex 'lost ancestral lake region'}
                      root:{HS.pc 'C'})]
               endTime: End)]
          startTime:0
          timeUnit: beats(4))
      add(chord: HS.score.chord)
      % add(chord: HS.score.inversionChord)
     }
   AllNotes = {MyScore collect($ test:isNote)}
%%
%% always at least 3 different sim PCs, i.e. there are never unisonos nor octaves
{SMapping.forTimeslices AllNotes
    proc {$Ns} {HS.rules.minCard Ns 3} end
    unit(endTime: End
         %% NOTE: avoid reapplication of constraint for equal consecutive sets of score object
         step: AkkDur % ?? should be shortest note dur available..
        )}
end
end
HS.distro.leftToRight_TypewiseTieBreaking}
{MyScore wait}
{RenderLilypondAndCsound_ET41 MyScore
 unit(file:"JI-example-inMotion-tmp")}

% Test
declare
[Fenv] = {ModuleLink ['x-ozlib://anders/strasheela/Fenv/Fenv.ozf']}
%% only direction of fenv segments are relevant for this example
{{Fenv.linearFenv [[0.0 0.0] [0.1 1.0] [0.2 0.0] [0.35 1.0]
                   [0.5 0.0] [0.75 1.0] [1.0 0.0]]} plot}

Figure JI-figuration shows a solution of the present case study. The texture of this example is represented with a hierarchic music representation, as discussed above. For example, the dyads of the upper staff are represented by a number of simultaneous containers with two notes each, which in turn are contained in a sequential container.

The chord figuration is shaped by several constraints; the most important are listed below. The sequence of directions of melodic pitch intervals (the pitch contour, {Polansky et al., 1992}) is individually constrained for every part. In the two parts in the upper staff, the pitch contours follow the same given envelope (pitch repetitions are disallowed for the upmost part). The contour of the bass continuously descends and ends on the chord root.

Other constraints control simultaneous pitches. At any time, 3 different pitch classes are present. Also, at least 50 percent of the upper-staff dyads are 7-limit consonances (in other words, the number of 3-limit intervals is restricted). Note that the latter constraint expresses a restriction on JI ratios (i.e., their limit), even though all pitches are internally represented by an ET (41-TET). Such JI constraints are possible for ETs that uniquely map the corresponding ratios to pitch classes of the ET; 41-TET provides unique pitch classes for many 7-limit intervals.
## Melody

This case study looks at 7-limit melody composition. In the proposed approach a melody expresses an implicit harmony, and so this case study makes use of the harmony definitions presented before. In addition, it introduces non-harmonic tones and their treatment, as well as formal aspects such as motifs.

The present study uses a 7-limit scale that Erlich {1998} proposed together with 7 related decatonic scales. Figure StaticSymmetricalMajor shows the static symmetrical major scale, notated in EHE for 22-TET. Note that 22-TET does not temper out the syntonic comma; arrows attached to accidentals indicate a shift by a tempered syntonic comma.10 A temperament for Erlich's decatonic scales must instead temper out the two commas 64:63 and 50:49. For example, the major second of these scales serves as both the Pythagorean 9:8 and the septimal 8:7 (the 64:63 comma is tempered out), and its tritone represents both 7:5 and 10:7 (the 50:49 comma vanishes). Temperaments that temper out these commas are called Pajara on the Alternate Tunings Mailing List, and Strasheela's tuning table can be set to such a temperament. Instead, we are using 22-TET for this case study, which also tempers out these commas (among others).

This scale generalizes several properties of the well-known diatonic scales for the 7-limit. For example, there are only two different step sizes: a small step (marked s in Figure StaticSymmetricalMajor; pitch class interval 2 in 22-TET) and a large step (marked L, pitch class interval 3). The sequence of s and L explains why this scale is called symmetrical. Like diatonic scales are constructed from 2 tetrachords that subdivide a fourth (4:3) into four tones {Chalmers, 1993}, this scale is constructed from 2 'pentachords' (marked by brackets) that subdivide a fourth into five tones (however, these pentachords are a tritone away from each other).
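That 22-TET tempers out both commas can be confirmed by rounding the ratios involved to 22-TET steps: the two members of each pair land on the same scale step. A short Python check (the helper name is ours, not Strasheela's):

```python
import math

def steps_22(ratio):
    """Closest 22-TET step count to a just frequency ratio."""
    return round(22 * math.log2(ratio))

# 64:63 tempered out: 9:8 and 8:7 map to the same step count
print(steps_22(9/8), steps_22(8/7))   # both map to 4 steps
# 50:49 tempered out: 7:5 and 10:7 map to the same step count
print(steps_22(7/5), steps_22(10/7))  # both map to 11 steps
```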
Finally, consonant 7-limit tetrads that only consist of scale tones can be constructed on all but two scale degrees (the remaining two degrees carry augmented triads).

%% TODO: remove all dependencies to other files: copy these defs into this file (or some file in same dir as this file for sharing with other examples)
% \insert '~/oz/music/Strasheela/strasheela/trunk/strasheela/examples/Counterpoint-Examples/Counterpoint-22ET.oz'

declare
[Segs ET22 Fenv]
= {ModuleLink ['x-ozlib://anders/strasheela/Segments/Segments.ozf'
               'x-ozlib://anders/strasheela/ET22/ET22.ozf'
               'x-ozlib://anders/strasheela/Fenv/Fenv.ozf']}

%% set accidentalOffset high enough for chord degree accidentals of non-chord tones
{HS.db.setDB {Adjoin ET22.db.fullDB unit(accidentalOffset: 7)}}
% {HS.pc 'E\\'}
% {HS.db.setDB ET22.db.fullDB}
{Init.setTempo 80.0}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Rhythm representation
%%

%% Symbolic duration names: Note durations are then written as
%% follows: D.d16 (16th note), D.d8 (eighth note) and so forth, D.d8_
%% (dotted eighth note). See doc of MUtils.makeNoteLengthsTable for
%% more details.
Beat = 4 * 3
D = {MUtils.makeNoteLengthsRecord Beat [3]}
/** %% Function expecting a symbolic duration name and returning the corresponding numeric duration.
%% */
fun {SymbolicDurToInt Spec} D.Spec end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Harmony defs
%%

/** %% Args.rargs
%% 'firstRoot' (default false): root of first chord (pc atom)
%% 'firstToLastRootInterval' (default false): pc interval between first and last chord root (pc atom, e.g., 'C' is 0, or false).
%% 'lastRoot' (default false): root of last chord (pc atom, or false)
%% 'firstType' / 'lastType' (default false): sets the type (index) of the first/last chord in Chords to the type specified, an atom (chord name specified in the database). Disabled if false.
%% */
%% 'howOftenRoot' (default false): record of args unit(pc:ET31_PC min:MinPercent max:MaxPercent): constrains percentage how often given pitch class occurs as a root in chord progression.
%% Args from super script MakeSchoenbergianProgression
MakeChords_22ETCounterpoint
= {Score.defSubscript
   unit(super: HS.score.makeChords
        %% diatonic chord with fd args
        idefaults: unit(constructor: {Score.makeConstructor HS.score.diatonicChord unit}
                        inScaleB: 1)
        rdefaults: unit(progressionSelector: resolveDescendingProgressions()))
   proc {$ Chords Args}
{HS.rules.setBoundaryRoots Chords Args.rargs}
{HS.rules.setBoundaryTypes Chords Args.rargs}
{HS.rules.schoenberg.progressionSelector Chords Args.rargs.progressionSelector}
end}

/** %% The interval between N1 and N2 is in [1, whole tone raised by a syntonic comma].
%% */
fun {IsStepR N1 N2}
{HS.rules.isStepR {N1 getPitch($)} {N2 getPitch($)}
{HS.pc 'D/'}
%     {HS.pc 'E'}
%     {HS.pc 'D#\\'}
}
end
proc {IsStep N1 N2}
{HS.rules.isStep {N1 getPitch($)} {N2 getPitch($)}
{HS.pc 'D/'}
%     {HS.pc 'E'}
%     {HS.pc 'D#\\'}
}
end

/** %%
%%
%%
%% */
Motif_A_Ns
= {Segs.tSC.defSubscript
unit(super: Score.makeItems_iargs
mixins: [Segs.makeCounterpoint_Mixin]
%% Motif features
motif: unit(%% explicit number of notes to avoid any ambiguity
%% (e.g., pitchContour has less elements)
n: 6
%% 5 notes specified
durations: [D.d4 D.d4 D.d8 D.d4 D.d2 D.d2]
            #fun {$Ns} {Pattern.mapItems Ns getDuration} end
            %% one less element than durations
            pitchContour: [2 2 2 2 0]
            #fun {$ Ns}
{Pattern.map2Neighbours {Pattern.mapItems Ns getPitch}
Pattern.direction}
end
isStep: [0 1 1 1 0]
#fun {$Ns} {Pattern.map2Neighbours Ns fun {$ N1 N2} {IsStepR N1 N2} end}
end
)
transformers: [Segs.tSC.removeShortNotes]
idefaults: unit(%% Set note class and add DomSpec support
constructor: {Score.makeConstructor HS.score.note
unit(inChordB: fd#(0#1))}
inScaleB: 1
                rule: proc {$Ns} {HS.rules.onlyOrnamentalDissonance_Durations Ns} end))
   nil % Body
  }

%% wrap seq around and set proper args
fun {Motif_A Args}
   Default = unit(rargs: unit(maxPitch: 'G'#5
                              minPitch: 'G'#3
                              maxInterval: 8#5
                              % step:8#7
                              maxNonharmonicNoteSequence: 1
                              % minPercentSteps: 60
                             ))
   Notes = {Motif_A_Ns {GUtils.recursiveAdjoin Default Args}}
in
   {Score.make2 {Adjoin {Record.subtractList Args [rargs iargs]}
                 seq(Notes)}
    unit}
end

%% TODO: rhythm for this motif
Motif_B
= {Score.defSubscript
   unit(super: Segs.makeCounterpoint_Seq
        mixins: [Segs.hook]
        idefaults: unit(n: 5
                        offsetTime: each#[D.d8 0 0 0 0]
                        duration: each#[D.d8 D.d8 D.d8 D.d4 D.d2]
                        inScaleB: 1
                        rule: proc {$ Ns}
{HS.rules.onlyOrnamentalDissonance_Durations Ns}
{Pattern.for2Neighbours Ns
                                  proc {$N1 N2} {IsStep N1 N2} end}
                              end)
        rdefaults: unit(maxPitch: 'G'#5
                        minPitch: 'G'#3
                        maxInterval: 8#5
                        % step:8#7
                        maxNonharmonicNoteSequence: 1
                        % minPercentSteps: 60
                        oppositeDir: 2 % last interval goes up
                       ))
   nil}

{GUtils.setRandomGeneratorSeed 0}
[MyScore]
= {SDistro.searchOne
   proc {$ MyScore}
ChordNo = 4 % depends on number of motifs with info-tag startChord (see below)
%     ChordNo = 5 % depends on number of motifs with info-tag startChord (see below)
Chords = {MakeChords_22ETCounterpoint
unit(iargs: unit(n: ChordNo
%% only specific chord roots permitted
root: fd#[{HS.pc 'C'} {HS.pc 'F'} {HS.pc 'G'}])
rargs: unit(types: ['harmonic 7th'
'subharmonic 6th']
firstRoot: 'C'
lastRoot: 'C'
progressionSelector: resolveDescendingProgressions(allowInterchangeProgression: true)
))}
MotifSeq
End
fun {GetMaxMotifPitch MyMotif}
         {Pattern.max {MyMotif mapItems($getPitch test:isNote)}}
      end
      %% Ps is list of loc max pitch of each motif. Contour follows fenv, and max interval is second
      proc {LocalMaxPattern Ps}
         % {Pattern.fenvContour Ps
         %  {Fenv.linearFenv [[0.0 0.0] [0.8 1.0] [1.0 0.0]]}}
         % {Pattern.increasing Ps}
         {Pattern.restrictMaxInterval Ps {HS.pc 'D#\\'}}
      end
   in
      %% TODO: find some automatic way to enter bar lines..
      MyScore
      = {Score.make
         sim(info: lily("\\cadenzaOn")
             [seq(handle: MotifSeq
                  %% number of motifs with info startChord must match ChordNo
                  [seq([motif_a(info: startChord)
                        motif_a(rargs:unit(removeShortNotes: 1))
                        motif_b
                        pause(duration:D.d2)])
                   seq([motif_a(info: startChord)
                        motif_a(rargs:unit(removeShortNotes: 1))
                        % motif_a(rargs:unit(removeShortNotes: 2))
                        motif_a(rargs:unit(removeShortNotes: 3))
                        motif_a(rargs:unit(removeShortNotes: 4))])
                   seq([motif_b(info: startChord)
                        motif_b
                        pause(duration:D.d2)])
                   seq([motif_a(info: startChord)])]
                  endTime:End)
              %% notes are implicitly related to simultaneous chord and scale
              seq(Chords endTime: End)
              seq([scale(index:{HS.db.getScaleIndex 'static symmetrical major'}
                         % index:{HS.db.getScaleIndex 'standard pentachordal major'}
                         transposition:0)]
                  endTime:End)]
             startTime:0
             timeUnit:beats(Beat))
         add(motif_a: Motif_A
             %% unused so far
             motif_b: Motif_B
             chord:HS.score.chord
             scale:HS.score.scale)}
      %% add bar lines to all but the first motif (with current lily tag
      %% implementation, these bar lines are always placed *before* the
      %% motif)
      {ForAll {MyScore collect($ test: fun {$X}
                                          {X isContainer($)} andthen
                                          {All {X mapItems($isNote)} GUtils.identity}
                                       end)}.2
       proc {$ MyMotif} {MyMotif addInfo(lily("\\ibar"))} end}
%%
{HS.score.harmonicRhythmFollowsMarkers MyScore Chords unit}
%%
%% Further constraints
%%
%% TMP comment: possibly requires changing motif b: allow for small skips
{ForAll Chords HS.rules.expressAllChordPCs}
%%
%% NOTE: this constraint can cause much search, because it is
%% applied very late (max motif pitches are known very late)
{ForAll {MotifSeq getItems($)} proc {$ SubMotifseq}
                   {LocalMaxPattern {SubMotifseq mapItems($GetMaxMotifPitch test:isSequential)}}
                end}
   end
   HS.distro.leftToRight_TypewiseTieBreaking
   % HS.distro.typewise_LeftToRightTieBreaking
  }
{MyScore wait}
{RenderLilypondAndCsound_ET22_Unmetered MyScore
 unit(file:"decatonic-melody-tmp")}

Figure decatonic-melody shows a melody solution. For clarity, its implicit harmony is notated on the second staff. This staff depicts the actual chord objects: chord pitch class sets are notated as grace notes, and chord roots as 'normal' notes that also indicate the duration of the underlying harmony.

The underlying harmony forms a quasi-plagal cadence of 7-limit tetrads, which only use tones of the symmetrical major scale. Consecutive chords are connected by common tones. Although the underlying harmonic structure resembles a traditional cadence, the chords, and even more so the scale employed, are certainly not conventional. It was therefore important to add constraints that ensure harmonic clarity. For example, all tones of the underlying harmony are present in the melody. Further, the melody features non-harmonic tones (marked by crosses), but strict constraints ensure that such tones do not affect the harmonic clarity: a non-harmonic tone cannot follow another non-harmonic tone, and non-harmonic tones are always approached and resolved stepwise (this particular solution shows only passing tones). The ornamental character of non-harmonic tones is further safeguarded by a constraint that takes note durations into account: a non-harmonic note must be preceded and followed by a note that is at least as long as the non-harmonic note itself.

Compared with disciplines like harmony and counterpoint, melody composition has been addressed far less frequently in the literature. An important reason for this difference is likely that melody composition is less easily formalized than harmony. In the proposed approach we therefore do not try to fully formalize melody composition either.
Instead, important motivic aspects are defined manually, while the actual melody pitches and the implicit harmony are found by the computer {Anders et al., "Interfacing Manual and Machine Composition", 2009}. The melody of this case study is constructed from 2 motifs, for which specific features are composed manually. For example, note durations and the pitch contour (interval directions) are given for motif a (e.g., bar 1 in Figure decatonic-melody). The motif declaration also states where skips and where steps occur in the motif. This declaration still allows for considerable flexibility: motifs can be transposed freely, and the actual sizes of skips and steps are variable as well. In addition, motif variations are defined by a function that changes the motif declaration (e.g., the durations and the contour): the variation used here removes one or more of the shortest motif notes (e.g., compare bars 4–7). Technically, motifs have been implemented as sub-CSPs, and the full CSP has been defined by assembling these sub-CSPs in time using Strasheela's temporal containers.

A faster harmonic rhythm is possible while preserving harmonic clarity if we add an accompaniment, which also makes it possible to unambiguously present further chords, for example, the subminor 7th (12:14:18:21), as in Figure decatonic-melody-2.11 A larger set of possible chords also allows for a more refined theory of harmony: in this case only ascending progressions are permitted (see the 31-TET case study above). Also, further melodic constraints can be applied with such a more flexible pitch set: in this CSP, all intervals between the melodic peaks of motifs are constrained to be upward steps.
%% TODO: remove all dependencies to other files: copy these defs into this file (or some file in same dir as this file for sharing with other examples)
% \insert '~/oz/music/Strasheela/strasheela/trunk/strasheela/examples/Counterpoint-Examples/Counterpoint-22ET.oz'

declare
[Segs ET22 Fenv]
= {ModuleLink ['x-ozlib://anders/strasheela/Segments/Segments.ozf'
               'x-ozlib://anders/strasheela/ET22/ET22.ozf'
               'x-ozlib://anders/strasheela/Fenv/Fenv.ozf']}

%% set accidentalOffset high enough for chord degree accidentals of non-chord tones
{HS.db.setDB {Adjoin ET22.db.fullDB unit(accidentalOffset: 7)}}
% {HS.pc 'E\\'}
% {HS.db.setDB ET22.db.fullDB}
{Init.setTempo 80.0}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Rhythm representation
%%

%% Symbolic duration names: Note durations are then written as
%% follows: D.d16 (16th note), D.d8 (eighth note) and so forth, D.d8_
%% (dotted eighth note). See doc of MUtils.makeNoteLengthsTable for
%% more details.
Beat = 4 * 3
D = {MUtils.makeNoteLengthsRecord Beat [3]}
/** %% Function expecting a symbolic duration name and returning the corresponding numeric duration.
%% */
fun {SymbolicDurToInt Spec} D.Spec end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Harmony defs
%%

/** %% Args.rargs
%% 'firstRoot' (default false): root of first chord (pc atom)
%% 'firstToLastRootInterval' (default false): pc interval between first and last chord root (pc atom, e.g., 'C' is 0, or false).
%% 'lastRoot' (default false): root of last chord (pc atom, or false)
%% 'firstType' / 'lastType' (default false): sets the type (index) of the first/last chord in Chords to the type specified, an atom (chord name specified in the database). Disabled if false.
%% */
%% 'howOftenRoot' (default false): record of args unit(pc:ET31_PC min:MinPercent max:MaxPercent): constrains percentage how often given pitch class occurs as a root in chord progression.
%% Args from super script MakeSchoenbergianProgression
MakeChords_22ETCounterpoint
= {Score.defSubscript
   unit(super: HS.score.makeChords
        %% diatonic chord with fd args
        idefaults: unit(constructor: {Score.makeConstructor HS.score.diatonicChord unit}
                        inScaleB: 1)
        rdefaults: unit(progressionSelector: resolveDescendingProgressions()))
   proc {$ Chords Args}
{HS.rules.setBoundaryRoots Chords Args.rargs}
{HS.rules.setBoundaryTypes Chords Args.rargs}
{HS.rules.schoenberg.progressionSelector Chords Args.rargs.progressionSelector}
end}

/** %% The interval between N1 and N2 is in [1, whole tone raised by a syntonic comma].
%% */
fun {IsStepR N1 N2}
{HS.rules.isStepR {N1 getPitch($)} {N2 getPitch($)}
{HS.pc 'D/'}
%     {HS.pc 'E'}
%     {HS.pc 'D#\\'}
}
end
proc {IsStep N1 N2}
{HS.rules.isStep {N1 getPitch($)} {N2 getPitch($)}
{HS.pc 'D/'}
%     {HS.pc 'E'}
%     {HS.pc 'D#\\'}
}
end

/** %%
%%
%%
%% */
Motif_A_Ns
= {Segs.tSC.defSubscript
unit(super: Score.makeItems_iargs
mixins: [Segs.makeCounterpoint_Mixin]
%% Motif features
motif: unit(%% explicit number of notes to avoid any ambiguity
%% (e.g., pitchContour has less elements)
n: 6
%% 5 notes specified
durations: [D.d4 D.d4 D.d8 D.d4 D.d2 D.d2]
            #fun {$Ns} {Pattern.mapItems Ns getDuration} end
            %% one less element than durations
            pitchContour: [2 2 2 2 0]
            #fun {$ Ns}
{Pattern.map2Neighbours {Pattern.mapItems Ns getPitch}
Pattern.direction}
end
isStep: [0 1 1 1 0]
#fun {$Ns} {Pattern.map2Neighbours Ns fun {$ N1 N2} {IsStepR N1 N2} end}
end
)
transformers: [Segs.tSC.removeShortNotes]
idefaults: unit(%% Set note class and add DomSpec support
constructor: {Score.makeConstructor HS.score.note
unit(inChordB: fd#(0#1))}
inScaleB: 1
                rule: proc {$Ns} {HS.rules.onlyOrnamentalDissonance_Durations Ns} end))
   nil % Body
  }

Motif_A_Type = {NewName}

%% wrap seq around and set proper args
proc {Motif_A Args ?MyScore}
   Default = unit(rargs: unit(maxPitch: 'G'#5
                              minPitch: 'G'#3
                              maxInterval: 8#5
                              % step:8#7
                              maxNonharmonicNoteSequence: 1
                              % minPercentSteps: 60
                             ))
   Notes = {Motif_A_Ns {GUtils.recursiveAdjoin Default Args}}
in
   MyScore = {Score.make2 {Adjoin {Record.subtractList Args [rargs iargs]}
                           seq(Notes)}
              unit}
   %% TMP comment
   % {MyScore addInfo(Motif_A_Type)}
end

% %% defined "manually" here instead of using arg 'isMotif' of Segs.tSC.defSubscript, because Motif_A_Ns returns notes and not a container (which could be changed...)
% fun {IsMotif_A X}
%    {Score.isScoreObject X} andthen {X hasThisInfo($ Motif_A_Type)}
%       end
IsMotif_B
Motif_B
= {Score.defSubscript
unit(super: Segs.makeCounterpoint_Seq
mixins: [Segs.hook]
idefaults: unit(n: 5
offsetTime: each#[D.d8 0 0 0 0]
duration: each#[D.d8 D.d8 D.d8 D.d4 D.d2]
inScaleB: 1
rule: proc {$ Ns}
         {HS.rules.onlyOrnamentalDissonance_Durations Ns}
         {Pattern.for2Neighbours Ns proc {$ N1 N2} {IsStep N1 N2} end}
      end)
rdefaults: unit(maxPitch: 'G'#5
minPitch: 'G'#3
maxInterval: 8#5
%                       step:8#7
maxNonharmonicNoteSequence: 1
%                       minPercentSteps: 60
oppositeDir: 2 % last interval goes up
)
%% TMP comment
isMotif: IsMotif_B
)
nil}

/** %% Returns a list of chords (akkords) forming the accompaniment.
%% */
fun {Accompaniment Args}
Defaults = unit(akkN: 4 % ??
%% 2 sim chord notes at a time
iargs: unit(n: 2
inChordB: 1
duration: D.d8)
rargs: unit(maxPitch: 'D'#4
minPitch: 'A'#2
maxRange: 5#4
%                                  minPcCard: 2
sopranoPattern: proc {$ Ps}
                   N = 2
                in
                   {Pattern.cycle Ps N}
                   {FD.distinct {List.take Ps N}}
                end
% bassPattern
% rule
))
in
{Segs.makeAkkords_Seq {Adjoin Defaults Args}}
end

{GUtils.setRandomGeneratorSeed 0}

[MyScore] = {SDistro.searchOne
% [MyScore] = {SDistro.exploreOne
proc {$ MyScore}
ChordNo = 3 % depends on number of motifs with info-tag startChord (see below)
Chords = {MakeChords_22ETCounterpoint
unit(iargs: unit(n: ChordNo)
rargs: unit(types: ['harmonic 7th'
'subharmonic 6th'
'subminor major 6th'
'subminor 7th'
]
firstRoot: 'C'
%                                                  lastRoot: 'C'
%                                                progressionSelector: resolveDescendingProgressions(allowInterchangeProgression: true)
%% no solution of ascending procession with only 'harmonic 7th' and 'subharmonic 6th'!
progressionSelector: ascending
%                                                 progressionSelector: harmonicBand
))}
MotifSeq
End
fun {GetMaxMotifPitch MyMotif}
{Pattern.max {MyMotif mapItems($ getPitch test:isNote)}}
end
%% Ps is list of local max pitches of each motif. Contour follows fenv, and max interval is a second.
proc {LocalMaxPattern Ps}
   % {Pattern.fenvContour Ps
   %  {Fenv.linearFenv [[0.0 0.0] [0.7 1.0] [1.0 0.0]]}}
   {Pattern.increasing Ps}
   {Pattern.restrictMaxInterval Ps {HS.pc 'D/'}}
end
AllMelodyMotifs
in
%% TODO: find some automatic way to enter bar lines..
MyScore = {Score.make
           sim(info: lily("\\cadenzaOn")
               [seq(handle: MotifSeq
                    %% number of motifs with info startChord must match ChordNo
                    [sim([motif_a(info:melMotif) accompaniment(offsetTime: D.d4 akkN:3)])
                     sim([motif_a(info:melMotif) accompaniment(offsetTime: D.d4 akkN:3)])
                     sim([motif_b(info:melMotif) accompaniment(offsetTime: D.d2 akkN:3)])
                     pause(duration:D.d2)]
                    endTime:End)
                %% notes are implicitly related to simultaneous chord and scale
                seq(Chords endTime: End)
                seq([scale(index:{HS.db.getScaleIndex 'static symmetrical major'}
                           % index:{HS.db.getScaleIndex 'standard pentachordal major'}
                           transposition:0)]
                    endTime:End)]
               startTime:0
               timeUnit:beats(Beat))
           add(motif_a: Motif_A
               %% unused so far
               motif_b: Motif_B
               accompaniment: Accompaniment
               chord:HS.score.chord
               scale:HS.score.scale)}
%%
AllMelodyMotifs = {MyScore collect($ test: fun {$ X} {X hasThisInfo($ melMotif)} end)}
%% With each melodic motif starts a new chord
{ForAll AllMelodyMotifs
 proc {$ MyMotif} {MyMotif addInfo(startChord)} end}
%% add bar lines to all but the first motif (with current lily tag
%% implementation, these bar lines are always placed *before* the
%% motif)
{ForAll AllMelodyMotifs.2
 proc {$ MyMotif} {MyMotif addInfo(lily("\\ibar"))} end}
%%
{HS.score.harmonicRhythmFollowsMarkers MyScore Chords unit}
%%
%% Further constraints
%%
{ForAll Chords HS.rules.expressAllChordPCs}
%%
%% NOTE: this constraint can cause much search, because it is
%% applied very late (max motif pitches are known very late)
%% TMP comment (BUG: causes blocking)
{LocalMaxPattern {Map AllMelodyMotifs GetMaxMotifPitch}}
end
HS.distro.leftToRight_TypewiseTieBreaking
%  HS.distro.typewise_LeftToRightTieBreaking
}
{MyScore wait}
{RenderLilypondAndCsound_ET22_Unmetered MyScore
unit(file:"decatonic-melody-2-tmp")}

## Counterpoint

Finally, we model 2-part microtonal counterpoint. Like the previous case study, this section uses the 7-limit scale static symmetrical major (Figure StaticSymmetricalMajor), and is tuned in 22-TET.

The study implements "harmonic counterpoint": the contrapuntal lines express an underlying harmonic structure (as Baroque counterpoint does, in contrast to Renaissance counterpoint). This case study thus again draws on the microtonal harmony definitions discussed before.12

Figure decatonic-counterpoint shows a solution; the underlying harmony is explicitly notated as explained above. The music representation consists of two sequential containers with notes, and a chord sequence, all contained in a simultaneous container.

%% TODO: remove all dependencies to other files: copy these defs into this file (or some file in same dir as this file for sharing with other examples)
%   \insert '~/oz/music/Strasheela/strasheela/trunk/strasheela/examples/Counterpoint-Examples/Counterpoint-22ET.oz'

%%
%% NOTE: this CSP can cause much search or not -- it perhaps depends on whether a suitable rhythmical structure was found in the beginning
%%

declare

[Segs ET22 Fenv] = {ModuleLink ['x-ozlib://anders/strasheela/Segments/Segments.ozf'
'x-ozlib://anders/strasheela/ET22/ET22.ozf'
'x-ozlib://anders/strasheela/Fenv/Fenv.ozf']}
%% set accidentalOffset high enough for chord degree accidentals of non-chord tones
{HS.db.setDB {Adjoin ET22.db.fullDB
              unit(accidentalOffset: 7)}}
% {HS.db.setDB ET22.db.fullDB}

{Init.setTempo 80.0}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Rhythm representation
%%

%% Symbolic duration names: Note durations are then written as
%% follows: D.d16 (16th note), D.d8 (eighth note) and so forth, D.d8_
%% (dotted eighth note). See doc of MUtils.makeNoteLengthsTable for
%% more details.
Beat = 4 * 3
D = {MUtils.makeNoteLengthsRecord Beat [3]}
/** %% Function expecting a symbolic duration name and returning the corresponding numeric duration.
%% */
fun {SymbolicDurToInt Spec} D.Spec end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Harmony defs
%%

/** %% Creates the chord progression underlying the counterpoint example.
%%
%% Args.rargs:
%% 'firstRoot' (default false): root of first chord (pc atom, or false).
%% 'lastRoot' (default false): root of last chord (pc atom, or false).
%% 'firstToLastRootInterval' (default false): pc interval between first and last chord root (pc atom, e.g., 'C' is 0, or false).
%% 'firstType' / 'lastType' (default false): sets the type (index) of the first/last chord in Chords to the type specified, an atom (chord name specified in the database). Disabled if false.
%% 'howOftenRoot' (default false): record of args unit(pc:ET31_PC min:MinPercent max:MaxPercent): constrains the percentage how often a given pitch class occurs as a root in the chord progression.
%% Args from the super script MakeSchoenbergianProgression are supported as well.
%% */
MakeChords_22ETCounterpoint
= {Score.defSubscript
unit(super:HS.score.makeChords
%% diatonic chord with fd args
idefaults: unit(constructor: {Score.makeConstructor HS.score.diatonicChord
unit}
inScaleB: 1)
rdefaults: unit(progressionSelector: resolveDescendingProgressions()
))
proc {$ Chords Args}
   {HS.rules.setBoundaryRoots Chords Args.rargs}
   {HS.rules.setBoundaryTypes Chords Args.rargs}
   {HS.rules.schoenberg.progressionSelector Chords Args.rargs.progressionSelector}
end}

/** %% Any local pitch minimum of Ns (list of HS.score.chordDegreeMixinForNote instances) must be a chord tone with either chord degree 1 (root) or 2 ("third"). This constraint is intended for the bass in order to improve the harmonic clarity.
%%
%% Note: this constraint is an over-simplified version (e.g., it does not allow for situations like cambiata), but it is already better than simply requiring only roots in the bass...
%% */
%% Alternative definition: lowest bass tone per chord must be either chord degree 1 (root) or 2 (third?). [this bass tone might be "too late" in chord]
%%
%% Concerning getChordAccidentals: there are chord tones and non-chord tones. For chord tones, the accidental is 0.
% {N getChordAccidental($)}
proc {RestrictChordDegrees_Bass Ns}
%% N is a chord tone with either chord degree 1 or 2 (root or third)
fun {IsProperDegree N}
{FD.conj {N getInChordB($)} {FS.reified.include {N getChordDegree($)} {GUtils.intsToFS [1 2]}}}
end
in
{Pattern.forNeighbours Ns 3
 proc {$ [N1 N2 N3]}
    IsLocalMin = {Pattern.localMinR {N1 getPitch($)} {N2 getPitch($)} {N3 getPitch($)}}
in
{FD.impl IsLocalMin
{IsProperDegree N2}
1}
end}
%% first and last note
{IsProperDegree Ns.1} = 1
{IsProperDegree {List.last Ns}} = 1
end

/** %% After a skip larger than 'maxStep' (default 5#4) there should be no skip in the opposite direction.
%% */
proc {RestrictSkips Pitches Args}
   Default = unit(maxStep: 5#4)
   As = {Adjoin Default Args}
   MaxStep = {FloatToInt {MUtils.ratioToKeynumInterval As.maxStep
                          {IntToFloat {HS.db.getPitchesPerOctave}}}}
in
{Pattern.forNeighbours Pitches 3
 proc {$ [P1 P2 P3]}
    Dist1 = {FD.decl}
    Dist2 = {FD.decl}
    Dir1 = {Pattern.direction P1 P2}
    Dir2 = {Pattern.direction P2 P3}
 in
    Dist1 = {FD.distance P1 P2 '=:'}
    Dist2 = {FD.distance P2 P3 '=:'}
    %% in case of a large skip
    {FD.impl (Dist1 >: MaxStep)
     {FD.nega {FD.conj (Dist2 >: MaxStep) (Dir1 \=: Dir2)}}
     1}
 end}
end

/** %% Only chord tones on strong beats.
%% */
proc {ChordToneOnStrongBeat MyMeasure Ns}
   {ForAll Ns
    proc {$ N}
       {FD.impl {MyMeasure onAccentR($ {N getStartTime($)})}
        {N getInChordB($)} 1}
    end}
end

/** %% No syncopation over bar lines. MyMeasure is a uniform measure object, Ns is a list of notes.
%% */
proc {NoSyncopationOverBarlines MyMeasure Ns}
   {ForAll Ns
    proc {$ N}
       {MyMeasure overlapsBarlineR($ {N getStartTime($)} {N getEndTime($)})} = 0
    end}
end

/** %% Duration of notes at the beginning of a bar is at least a quarter note.
%% */
proc {StartBarWithLongNote MyMeasure Ns}
   {ForAll Ns
    proc {$ N}
       {FD.impl {MyMeasure onMeasureStartR($ {N getStartTime($)})}
        ({N getDuration($)} >=: D.d4) 1}
    end}
end

/** %% Pattern constraint on the local maxima of the Ns pitches: at most a third away from each other and increasing.
%% */
proc {LocalMaxPattern Ns}
   LocalMax
in
   thread
      LocalMax = {Pattern.getLocalMax {Map Ns fun {$ N} {N getPitch($)} end}}
   end
   thread
      {Pattern.for2Neighbours LocalMax
       proc {$ P1 P2}
          {FD.distance P1 P2 '=<:' {HS.pc 'E'}}
          P1 <: P2
       end}
   end
end

/** %% The cardinality of the set of pitch classes of Notes (list of HS.score.note objects) is at least Card (FD int).
%% */
proc {MinCard Notes Card}
PC_FS = {GUtils.intsToFS {Pattern.mapItems Notes getPitchClass}}
AuxCard = {FD.decl}
in
AuxCard = {FS.card PC_FS}
AuxCard >=: Card
end

/** %% 5-limit and 7-limit consonant pitch class intervals.
%% no prime, fifths, octaves to avoid parallels
%% */
ConsonancePCs
= {Map [8#7 7#6 6#5 5#4 8#5 5#3 12#7 7#4] HS.score.ratioToInterval}
/** %% 5-limit and 7-limit consonant intervals over two octaves.
%% */
fun {MakeConsonancePCs_multipleOctaves OctaveNo}
{LUtils.accum
{Map {List.number 0 OctaveNo-1 1}
fun {$I} {Map ConsonancePCs fun {$ PC} PC + I*{HS.db.getPitchesPerOctave} end} end}
Append}
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Solver call
%%

%%
%% If search is too complex for larger score, then create full result from subsections (e.g. two chords at a time)
%%
{GUtils.setRandomGeneratorSeed 0}
/** %% Variant of Segs.makeCounterpoint that predefines new default args.
%% */
fun {MakeVoiceNotes Args}
Defaults = unit(iargs: unit(inScaleB: 1 % only scale tones
%                              duration: fd#[D.d8 D.d4 D.d2]
%                              offsetTime: fd#[0 D.d4 D.d2]
)
rargs: unit(maxInterval: 2#1
maxNonharmonicNoteSequence: 1
%% hm, likely makes search more complex
%                              minPercentSteps: 60
))
in
{Segs.makeCounterpoint {GUtils.recursiveAdjoin Defaults Args}}
end
[MyScore] = {SDistro.searchOne
proc {$ MyScore}
   ChordNo = 2 % ChordNo = 5
   End
   VoiceNs1 = {MakeVoiceNotes unit(iargs: unit(n: ChordNo*8 % depends also on duration
                                               duration: fd#[D.d8 D.d4 D.d2])
                                   rargs: unit(maxPitch: 'F'#5 % pitch unit and notation is et22
                                               minPitch: 'A'#3))}
   VoiceNs2 = {MakeVoiceNotes unit(iargs: unit(%% this constructor only for the bass?
                                               constructor: {Score.makeConstructor HS.score.chordDegreeNote unit}
                                               n: ChordNo*8
                                               duration: fd#[D.d8 D.d4 D.d2])
                                   rargs: unit(maxPitch: 'D'#4
                                               minPitch: 'E'#2
                                               % minPercentSteps: false
                                              ))}
   Chords = {MakeChords_22ETCounterpoint
             unit(iargs: unit(n: ChordNo
                              duration: D.d1 * 2)
                  rargs: unit(types: ['harmonic 7th' 'subharmonic 6th']
                              firstRoot: 'C'
                              % lastRoot: 'C'
                             ))}
   MyMeasure = {Score.make2 measure(beatNumber: 4 %% 4/4 beat
                                    beatDuration: D.d4
                                    endTime: End)
                unit(measure: Measure.uniformMeasures)}
   AllNotes
in
   MyScore = {Score.make
              sim([seq(VoiceNs1 endTime:End)
                   seq(info:lily("\\clef bass") VoiceNs2 endTime:End)
                   %% notes are implicitly related to simultaneous chords and scale
                   seq(Chords endTime:End)
                   seq([scale(index: {HS.db.getScaleIndex 'static symmetrical major'}
                              % index: {HS.db.getScaleIndex 'standard pentachordal major'}
                              transposition: {ET22.pc 'C'}
                              endTime: End)])
                   MyMeasure]
                  startTime: 0
                  timeUnit: beats(Beat))
              add(chord: HS.score.chord
                  scale: HS.score.scale)}
   AllNotes = {MyScore collect($ test:isNote)}
%%
%% Constraints
%%
{ForAll Chords HS.rules.expressEssentialChordPCs}
{RestrictChordDegrees_Bass VoiceNs2}
{ForAll [VoiceNs1 VoiceNs2]
 proc {$ Ns}
    Ps = {Pattern.mapItems Ns getPitch}
 in
    %% restrict non-harmonic tones (suspension etc.)
    % {HS.rules.clearHarmonyAtChordBoundaries Chords Ns}
    {RestrictSkips Ps unit}
    {ChordToneOnStrongBeat MyMeasure Ns}
    {HS.rules.clearDissonanceResolution Ns}
    {Pattern.noRepetition Ps} % no direct pitch repetition
    {HS.rules.onlyOrnamentalDissonance_Durations Ns}
    {Pattern.undulating Ps unit(min:3 max:8)}
    %% TMP: I would like something less strict: after skip > third no skip in opposite dir
    {HS.rules.ballistic Ps unit(oppositeIsStep: true)}
    {NoSyncopationOverBarlines MyMeasure Ns}
    {StartBarWithLongNote MyMeasure Ns}
 end}
{LocalMaxPattern VoiceNs1}
%%
%% Always at least 2 different sim PCs
%% ?? is this actually effective? It appears sometimes not, but mostly it is fine.
thread
   {SMapping.forTimeslices AllNotes
    proc {$ Ns} {MinCard Ns 2} end
    unit(endTime: End
         %% NOTE: avoid reapplication of constraint for equal consecutive sets of score objects
         step: D.d8 % ?? should be the shortest note duration available..
        )}
end
thread
   %% Non-chord tones are consonant to each other.
   %% seems to make the search problem more complex; it is not really required
   {HS.rules.intervalBetweenNonharmonicTonesIsConsonant AllNotes
    {MakeConsonancePCs_multipleOctaves 2}}
end
{HS.rules.noParallels2 AllNotes unit}
end
%% first determine the rhythmic structure (otherwise the search realises too late that the voices do not end together and with the chords). The fixed number of notes is likely a problem.
HS.distro.typewise_LeftToRightTieBreaking
%  HS.distro.leftToRight_TypewiseTieBreaking
}
{MyScore wait}
{RenderLilypondAndCsound_ET22 MyScore
unit(file:"decatonic-counterpoint-tmp")}

The constraints on the underlying harmony of the first melody example are in force again (consonant tetrads, only scale pitch classes are used, and chords are connected by common pitch classes), complemented by a few further harmonic constraints. At any time, at least 2 different pitch classes sound. On a strong beat, only chord tones are permitted. Also, the harmony is restricted to root positions or first inversions, which has been implemented by constraints between chord degrees and bass notes: a local minimum of the pitches in the bass must be either chord degree 1 (i.e. the root) or 2 (the third of the two possible chords, harmonic 7th and subharmonic 6th). For example, the second note in bar 1 of Figure decatonic-counterpoint is a local minimum, and it is the third of the underlying harmonic seventh chord over C.
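
The bass rule can be paraphrased as a simple check over a fully determined bass line. The following sketch is ours, in Python rather than the paper's Oz, and covers only interior notes; the actual Strasheela rule is stated as a reified constraint over still-undetermined variables and additionally constrains the first and last note:

```python
def check_bass_local_minima(pitches, chord_degrees, is_chord_tone):
    """Check that every local pitch minimum of a bass line is a chord
    tone whose chord degree is 1 (root) or 2 ("third").

    pitches, chord_degrees and is_chord_tone are parallel lists,
    one entry per bass note (all values already determined).
    """
    for i in range(1, len(pitches) - 1):
        if pitches[i - 1] > pitches[i] < pitches[i + 1]:  # local minimum
            if not (is_chord_tone[i] and chord_degrees[i] in (1, 2)):
                return False
    return True

# A local minimum on chord degree 5 violates the rule:
print(check_bass_local_minima([50, 47, 52], [1, 5, 1], [True, True, True]))  # False
```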

The melody is constrained as follows. Melodic intervals form a "ballistic curve": a skip is followed by a smaller skip/step in the same direction or a step into the opposite direction. No direct pitch repetitions are permitted, and the pitch contour must undulate, i.e. the number of intervals going in the same direction is restricted by a lower bound (here 3) and an upper bound (8); nevertheless, directly after the first and before the last note a direction change is permitted as well. The treatment of non-harmonic notes is restricted as in the melody case study above, but a few additional constraints are applied to improve the harmonic clarity. For example, simultaneous non-harmonic tones must be consonant with each other. Also, if one voice resolves a dissonance (a non-harmonic tone, again marked by crosses), then the other voice must not start a new dissonance at the same time and that way mask the dissonance resolution (in the solution shown, it so happens that no simultaneous dissonances occur). Besides, neither open nor hidden parallels of perfect consonances are permitted.
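
The ballistic-curve rule can likewise be paraphrased as a check on a determined pitch sequence. This Python sketch is our simplified illustration, not the Strasheela implementation (the actual rule is the constraint HS.rules.ballistic over pitch variables; the threshold max_step is an assumed value):

```python
def is_ballistic(pitches, max_step=2):
    """Check the "ballistic curve" property of a determined melody:
    after a skip (interval > max_step), the next interval must either
    continue in the same direction without getting larger, or be a
    step (<= max_step) in the opposite direction.
    """
    for i in range(len(pitches) - 2):
        d1 = pitches[i + 1] - pitches[i]
        d2 = pitches[i + 2] - pitches[i + 1]
        if abs(d1) > max_step:                        # d1 is a skip
            same_direction = d1 * d2 > 0
            if same_direction and abs(d2) > abs(d1):  # growing skip
                return False
            if not same_direction and abs(d2) > max_step:  # answering skip
                return False
    return True

print(is_ballistic([60, 65, 67, 66]))  # skip up, step up, step down -> True
print(is_ballistic([60, 65, 60]))      # skip up answered by a skip down -> False
```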

Some constraints on the rhythm have been applied as well. However, as this paper addresses microtonal pitch, the rhythmic structure is very simple, and only a few rhythmic constraints have been used. The domain for all note durations consists of the note values half note, quarter note and eighth note, and a 4/4 meter is set. In order to clarify the metric structure, syncopations over bar lines are not allowed, and the first note value of a bar must be at least a quarter note.
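
Once onsets and durations are determined, both rhythmic rules reduce to simple arithmetic. A Python sketch of the two checks (our illustration, assuming a 4/4 bar of 4 beats with the quarter note as 1 beat; the Strasheela versions are reified constraints involving a measure object):

```python
def check_rhythm(notes, bar=4, min_start_dur=1):
    """notes: list of (start_time, duration) pairs, measured in beats
    (quarter note = 1 beat, bar = 4 beats, i.e. a 4/4 meter).

    Rule 1: no note sounds across a bar line (no syncopation).
    Rule 2: a note starting on a bar line lasts at least a quarter note.
    """
    for start, dur in notes:
        next_barline = (start // bar + 1) * bar
        if start + dur > next_barline:                # crosses a bar line
            return False
        if start % bar == 0 and dur < min_start_dur:  # short note on the downbeat
            return False
    return True

print(check_rhythm([(0, 2), (2, 2), (4, 1), (5, 3)]))  # True
print(check_rhythm([(0, 2), (2, 3)]))                  # second note is syncopated -> False
```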

## Summary and Discussion

This paper presented a computational model for microtonal music theories and composition that makes use of the constraint programming paradigm. The fundamental layer of this model is its pitch representation, which introduces variables for pitches, pitch classes, and (chord or scale) degrees. This pitch representation supports arbitrary equal temperaments (ET). We proposed constraints that define the relations between these representations, as well as transpositions for each representation. Further, we proposed a constrainable music representation for higher-level pitch-related concepts such as chord and scale objects.

The model has been implemented in Strasheela, so that this model can be used together with other Strasheela features. For example, Strasheela's constrainable representation of temporal score object hierarchies is available.

This paper demonstrated the proposed model in a number of case studies that implement microtonal theories of harmony, melody and counterpoint. These case studies also showed how the model supports various equal temperaments. We modeled a diatonic cadence in 12-tone equal temperament (12-TET); a 7-limit harmony progression in 31-TET and adaptive just intonation (JI); a chord figuration of a chord from La Monte Young's The Well-Tuned Piano in 41-TET; and finally a melody and harmonic counterpoint with Erlich's static symmetrical major scale in 22-TET.

A possible criticism questions computer-aided composition of microtonal music in general. While we have centuries of knowledge for the rules that can be applied to 12-TET compositions, how can we know and define rules that lead to sensible music for tuning systems that have rarely, if ever, been explored so far?

We don't think that this problem really exists for a composer who is interested in microtonal music as a means to explore fresh resources. In our experience – regardless of whether we compose in 12-TET or microtonally – selecting and defining rules is an integral part of the composition process. When evaluating the resulting music by listening, we often heard some shortcoming, which we then tried to address with a new rule. For example, the 7-limit chord figuration in Figure JI-figuration originally contained many 3-limit intervals such as open fifths between the two upper voices, which we felt sounded rather "empty". After we added a rule that required many 7-limit intervals between these voices, the resulting sound became clearly richer, even fancy. Also, we hope the examples above demonstrate that we do not necessarily need to start from scratch when composing microtonal music. It is possible to apply certain conventional rules virtually unchanged (e.g., some melodic rules such as the treatment of non-harmonic tones). Other conventional rules can be generalized for microtonal music, as we showed with our generalized version of Schoenberg's directions for better progressions. In general, it is often desirable that musical results display some consistency. For example, we may want a motif sequence to show some pattern, or we may want to avoid that the dissonance degree of a chord progression jumps wildly back and forth. Compositional rules can help to enforce consistency regarding various musical aspects, and such rules are no more difficult to formulate for microtonal music than for 12-TET.

Some limitations of the proposed model should be mentioned. Because this model integrates pitch classes, it only supports equal temperaments that repeat at the octave. A counterexample is the Bohlen-Pierce scale, which repeats at the 3/1 interval, called a tritave {Mathews et al., "The Bohlen-Pierce Scale", 1989}. Nevertheless, octave-repeating scales are particularly common. Strasheela itself also has some limitations. For example, arbitrary musical textures can be expressed with its music representation, but constraining the hierarchic nesting of score objects is severely restricted {Anders, 2007}.
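
For illustration, the equal-tempered version of the Bohlen-Pierce scale divides the tritave into 13 equal steps, so none of its pitches repeat at the octave; the step size follows directly from the 3/1 ratio (a Python sketch of this standard calculation, not code from the model):

```python
import math

# Equal-tempered Bohlen-Pierce: 13 equal divisions of the tritave (3/1).
tritave_cents = 1200 * math.log2(3)  # size of the tritave, ~1901.96 cents
bp_step = tritave_cents / 13         # one scale step, ~146.3 cents

print(round(bp_step, 1))  # 146.3
```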

The presented model supports ETs only; we are currently working on an extension for arbitrary regular temperaments. Remember that regular temperaments can also express arbitrary just intonations. Regular temperaments will be represented by a subset of the pitches (pitch classes) of an equal temperament with a high resolution, such as 1200-TET (cent) or even 120000-TET (millicent).
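
The idea of embedding just intonation in a high-resolution ET can be sketched as rounding JI ratios to the nearest ET step; the rounding error shrinks as the resolution grows (our Python illustration; the function name is hypothetical):

```python
import math

def ratio_to_et_step(ratio, steps_per_octave):
    """Round a just-intonation frequency ratio to the nearest step of
    an equal temperament with the given number of steps per octave."""
    return round(steps_per_octave * math.log2(ratio))

# The just fifth 3/2 (~701.955 cents):
print(ratio_to_et_step(3/2, 1200))    # 702 steps of 1200-TET
print(ratio_to_et_step(3/2, 120000))  # 70196 steps of 120000-TET
```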

In summary, the case studies illustrate that the presented model allows the user to apply various constraints to a microtonal music representation. We shaped each case study in its own way with a certain set of constraints. Obviously, these constraint sets are only examples; very different constraints may be applied if different results are intended. In fact, this capability is the major strength of Strasheela.

## Acknowledgments

We are grateful for helpful comments by John H. Chalmers, John Rahn and William R. Sethares on drafts of the paper. This work was supported by the EPSRC project `Learning the Structure of Music' (LeStruM), EP/D062934/1.

## Footnotes:

1 In addition, Strasheela supports user-defined tuning tables, where the pitches of an ET are mapped to actual pitches. The format of this tuning table is similar to the scale file format of the Scala program. Each pitch within an octave is declared either as a float (measured in cents) or as a frequency ratio for JI intervals.

2 This formula implements the convention that middle C is situated in Octave 4, hence the addend 1.
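
Assuming the usual linear pitch-number formula pitch = (octave + 1) · pitchesPerOctave + pitchClass, the convention can be illustrated as follows (a Python sketch, not the Strasheela implementation):

```python
def pitch_number(pitch_class, octave, pitches_per_octave=12):
    """Linear pitch number from pitch class and octave.

    The addend 1 places middle C (pitch class 0, octave 4) at
    (4 + 1) * 12 + 0 = 60 in 12-TET, the conventional MIDI key number.
    """
    return (octave + 1) * pitches_per_octave + pitch_class

print(pitch_number(0, 4))      # middle C in 12-TET -> 60
print(pitch_number(0, 4, 22))  # middle C in 22-TET -> 110
```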

3 Having explicit representations of the start time, duration, and end time at the same time is not redundant as these pieces of information can be undetermined.

4 There can be rests between objects in a sequential container or before objects in a simultaneous container, either represented by an explicit rest object or by the offset parameter supported by all temporal objects (e.g., notes, sequential and simultaneous containers).

5 The full source code of all music theory case studies presented in this paper is available at <TODO URL>.

6 For brevity, we present the constraints of the music theory case studies only in English; please refer to the provided source code for the full formal details.

7 This constraint implements a particularly strict notion of a cadence, where all scale notes must sound. A less strict version requires only those pitch classes which distinguish a scale among all other likely scales (e.g., the pitch classes G, B, and F are sufficient to distinguish C-major among all major scales) {Rothenberg "A model for pattern perception with musical applications part II: The information content of pitch structures", 1978}.

8 Only the minimal intervals towards the pitch classes of the second chord are taken into account; the Bb of the first chord is ignored.

9 In this case study, Young's ratios are treated as pitch classes that can be octave-transposed. For example, both the major second and the major ninth are possible. Nevertheless, the root is preserved and specially treated; see the rules for the bass.

10 When notating 22-TET with conventional accidentals for 3-limit intervals and accidentals that indicate a syntonic comma shift for 5-limit intervals, then pairs of enharmonically equivalent pitches occur. For example, C+ is the same pitch as Db in 22-TET, and so are C#- and Db+ (plus/minus indicating a syntonic comma up/down).
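
These equivalences can be checked by counting 22-TET scale steps. In this sketch (ours, in Python) the standard 22-TET interval sizes are assumed: the sharp (chromatic semitone) spans 3 steps, the diatonic semitone 1 step, and the syntonic comma 1 step:

```python
# 22-TET interval sizes in scale steps (derived from the fifth = 13 steps):
SHARP = 3  # chromatic semitone (apotome), e.g. C -> C#
LIMMA = 1  # diatonic semitone, e.g. C -> Db
COMMA = 1  # syntonic comma, notated + (up) and - (down)

c_plus        = 0 + COMMA          # C+
d_flat        = 0 + LIMMA          # Db
c_sharp_minus = 0 + SHARP - COMMA  # C#-
d_flat_plus   = 0 + LIMMA + COMMA  # Db+

print(c_plus == d_flat)              # True: C+ and Db are the same 22-TET pitch
print(c_sharp_minus == d_flat_plus)  # True: and so are C#- and Db+
```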

11 The enharmonic spelling of Figure StaticSymmetricalMajor is kept in Figure decatonic-melody-2 as well, so the scale pitches can be more easily recognized. However, doing so compromises the enharmonic spelling of the subminor 7th chord.

12 Strasheela also supports counterpoint where no explicit underlying harmonic structure is defined. For example, the Strasheela website (http://strasheela.sourceforge.net/) presents examples that implement Fuxian first species counterpoint and florid counterpoint in 12-TET. The approaches shown there can be used for modeling microtonal music as well.

Date: Spring 2010
