Procedurally Generated Symphonies

Tags: System, Music Theory, Design
Owner: Justin Nearing
💡
So this was originally written in Systems. But it's turned into more of an Epic than an individual iterative feature to work on. I wasn't exactly sure what the next feature I wanted to work on was, so I used this page to dump all the design goals I had. From there I narrowed it down into specific Systems.

I've gone on a bunch of tangents thinking about the different things that can and/or need to be done in the future to fully realize the vision here.

Let’s break it down into a single iterative improvement.

Right now, the only notes that play are when the Player steps on the platform.

I want the Player to be more of a Conductor of a musical composition than the one playing the instrument.

The easiest way to move in this direction is to have notes playing between the platforms the Player steps on.

Note Containers

Pattern Generation

I don’t think building patterns from Notes would be especially taxing. I → IV → V progressions should be fairly easy to codify at this point.

I think focusing on rhythm at this point would be more beneficial.

Especially since we can start with the kick drum. A single, “pitchless” note.

At first we define four on the floor: a kick on every beat.

Then a kick on beats 2 and 4.

Maybe on 1 and 3.
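To make this concrete, here is a minimal sketch of those kick patterns as per-beat flags, assuming 4 beats per bar. The names (RhythmPattern, ShouldPlayKick) are illustrative, not actual LETSGO code.

```cpp
#include <array>
#include <cstddef>

// One flag per beat in a 4-beat bar: true = play the kick on that beat.
using RhythmPattern = std::array<bool, 4>;

constexpr RhythmPattern FourOnTheFloor = { true,  true,  true,  true  }; // 1-2-3-4
constexpr RhythmPattern BackBeat       = { false, true,  false, true  }; // 2 and 4
constexpr RhythmPattern DownBeats      = { true,  false, true,  false }; // 1 and 3

// Would be called from whatever consumes the Quartz beat callback (hypothetical hook).
bool ShouldPlayKick(const RhythmPattern& Pattern, std::size_t BeatInBar)
{
    return Pattern[BeatInBar % 4];
}
```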

What we on about?

Storing notes.

Let us assume 4 beats in a bar.

Quartz gives us those beats; we need to fill those beats with notes.

That could be 4 notes of a scale.

Could be 3 notes held in unison for a whole bar. (Theorists call that a chord, specifically a triad.)

Might be a snare.

Might be a synth.

Might be an adlib shouting “AI ain't shit!”

(sorry)

Point is, we need a container of sounds instructing which sound cues to play on a beat.

Notes should have a relationship to the beats in a bar.

4 notes in a 4 beat bar would obviously be quarter notes. But straight quarter notes forever gets boring.

This establishes a relationship between the notes to be played and the length of each note.
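A minimal sketch of what such a container could look like, again assuming 4 beats per bar and string cue names. In Unreal this would more likely be a TMap<int32, TArray<FName>>; the names here are illustrative.

```cpp
#include <map>
#include <string>
#include <vector>

// One bar's worth of instructions: beat index (0-3) -> sound cues to fire on that beat.
struct Bar
{
    std::map<int, std::vector<std::string>> CuesPerBeat;
};

// Example: a kick on every beat, with a snare layered on beats 2 and 4.
Bar MakeExampleBar()
{
    Bar Out;
    Out.CuesPerBeat[0] = { "kick_cue" };
    Out.CuesPerBeat[1] = { "kick_cue", "snare_cue" };
    Out.CuesPerBeat[2] = { "kick_cue" };
    Out.CuesPerBeat[3] = { "kick_cue", "snare_cue" };
    return Out;
}
```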

Generative Context

The overall intent of LETSGO is to have a musical composition that is half generative, half player driven.

Assume the player is given a choice of 3 notes at the beginning of play. They choose one of those Notes.

This sets the tonic for the composition.

An amazing amount of information can be generated from that single choice: think of the scales appropriate to said tonic. This is the entire point of the Music Theory Engine.

The player is presented another note to select: minor 3rd, major 3rd, whatever note would make it mixolydian, etc.

Now you have two-thirds of a triad, and the rest is easily inferred. That lets you generate a progression of triads.

That gives you more than enough to layer in a melody on top.

Each play session has the player and engine mutually building a generative musical composition.
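As a sketch of how much falls out of that single tonic choice, here is the derivation using semitone offsets for a major scale. Function and variable names are hypothetical, not the Music Theory Engine's actual API.

```cpp
#include <array>
#include <vector>

// Major scale as semitone offsets from the tonic.
constexpr std::array<int, 7> MajorScale = { 0, 2, 4, 5, 7, 9, 11 };

// Triad rooted on a 0-based scale degree: stack thirds within the scale (degrees n, n+2, n+4).
std::array<int, 3> TriadOnDegree(int TonicMidi, int Degree)
{
    auto Note = [&](int D) { return TonicMidi + MajorScale[D % 7] + 12 * (D / 7); };
    return { Note(Degree), Note(Degree + 2), Note(Degree + 4) };
}

// A I -> IV -> V progression is then just degrees 0, 3, 4.
std::vector<std::array<int, 3>> OneFourFive(int TonicMidi)
{
    return { TriadOnDegree(TonicMidi, 0), TriadOnDegree(TonicMidi, 3), TriadOnDegree(TonicMidi, 4) };
}
```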

What we have now

Currently, we have this concept of a SpawnPool

It’s a component of the Music Platform Spawner that contains the Notes each Platform will spawn.

It’s basically just an array of Notes.

The GenerateScale function is called at the beginning of play, filling the pool with all the notes in all the scales for a tonic chord.

Each spawned platform will “pop” the first item from the array, which will be the next note of the scale.
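A rough sketch of that behavior, with std containers standing in for Unreal's TArray. SpawnPool and GenerateScale approximate the names used above, but this is not the actual implementation.

```cpp
#include <deque>
#include <string>

struct SpawnPool
{
    std::deque<std::string> Notes; // e.g. "C", "D", "E", ...

    // Called at the beginning of play: fill the pool with every note
    // of every scale built on the tonic.
    void GenerateScale(const std::string& Tonic)
    {
        // ... omitted: Music Theory Engine fills Notes from Tonic ...
    }

    // Each spawned platform pops the first item, the next note of the scale.
    std::string PopNextNote()
    {
        std::string Next = Notes.front(); // assumes the pool is non-empty
        Notes.pop_front();
        return Next;
    }
};
```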

Constructed Note

What do you need to play a single note?

[Note, Length, Octave, Instrument]

[C#, quarter note, 2, synth]

Progression Context

What do you need to play a constructed note in a musical composition?

[Constructed Note, Bar to Play, Position in Bar]

[synth_quarterNote_c#2, 4th bar, 3rd note in bar]
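As plain structs, those two tuples might look like this; field names and types are illustrative.

```cpp
#include <string>

// [Note, Length, Octave, Instrument]
struct ConstructedNote
{
    std::string Note;       // "C#"
    int Length = 4;         // denominator: 4 = quarter note
    int Octave = 2;
    std::string Instrument; // "synth"
};

// [Constructed Note, Bar to Play, Position in Bar]
struct ProgressionNote
{
    ConstructedNote Note;  // synth_quarterNote_c#2
    int Bar = 4;           // 4th bar
    int PositionInBar = 3; // 3rd note in the bar
};
```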

Musical Context

What do you need to play a constructed note musically?

[Notes relation to tonic, tension/resolution degree, other notes/instruments being played]

[minor 3, (?), [bass = tonic whole note, kick playing, crash playing ]]

That's a lot of stuff to track and build. So let's take it one at a time.

Constructed Note

[Note, Length, Octave, Instrument]

[C#, quarter note, 2, synth]

Note

Enum containing the note, already built.

Length

Length      Notes per bar
Whole       1
Half        2
Quarter     4
Eighth      8
Sixteenth   16

Sum of lengths cannot exceed the bar. It can, musically speaking (you just tie the phrase into the next bar), but as an initial rule I think keeping the notes within the bar is a good start.
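A minimal sketch of that rule, treating each length as the value from the table above (a note's duration is 1/value of a bar) and checking that a set of notes fits in one bar. Names are hypothetical.

```cpp
#include <vector>

enum class ENoteLength
{
    Whole = 1,
    Half = 2,
    Quarter = 4,
    Eighth = 8,
    Sixteenth = 16
};

// True if the given lengths sum to at most one bar.
// Four quarters: 4 * 1/4 = 1.0 -> fits. Four quarters plus an eighth -> doesn't.
bool FitsInBar(const std::vector<ENoteLength>& Lengths)
{
    double Total = 0.0;
    for (ENoteLength Length : Lengths)
    {
        Total += 1.0 / static_cast<double>(Length);
    }
    return Total <= 1.0;
}
```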

Octave

Ok so this is interesting.

The workflow so far is to open Ableton, pull up a VST plugin with some synth, and start exporting notes.

It’s a metric shit-ton of work.

For each supported instrument, supporting each note from octaves 2-6 means 5 * 12 = 60 notes to export.

60 notes, no cap. And pianos have 88 keys, spanning over 7 octaves. Get outta here.

Which means I either seriously look into accessing a VST directly, OR consider the octave to be dependent on the instrument.

Instrument

From Unreal's perspective there are no instruments, just a collection of sound files grouped by Octave/Instrument.

Essentially we need a map:

Constructed Note Cue Map

Note   Octave   Instrument       CueName
C      1        Cheese Strings   Cheese Strings_c1_wav_cue
C      2        Cheese Strings   Cheese Strings_c2_wav_cue
Db     1        Cheese Strings   Cheese Strings_db1_wav_cue
D      2        Cheese Strings   Cheese Strings_d2_wav_cue
E      6        Cheese Strings   Cheese Strings_e6_wav_cue

The Theory Engine deals in Notes, Octaves, and Instruments, determining to play:

  • Note: E
  • Octave: 6
  • Instrument: Cheese Strings.

Given these 3 criteria, we can determine the appropriate sound cue using the map above.
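A sketch of that lookup in plain C++, with a tuple as the composite key and std::map standing in for Unreal's TMap. Cue names follow the convention from the table above; everything else is illustrative.

```cpp
#include <map>
#include <string>
#include <tuple>

// (Note, Octave, Instrument) -> sound cue name.
using CueKey = std::tuple<std::string, int, std::string>;

const std::map<CueKey, std::string> CueMap = {
    { { "C",  1, "Cheese Strings" }, "Cheese Strings_c1_wav_cue"  },
    { { "C",  2, "Cheese Strings" }, "Cheese Strings_c2_wav_cue"  },
    { { "Db", 1, "Cheese Strings" }, "Cheese Strings_db1_wav_cue" },
    { { "E",  6, "Cheese Strings" }, "Cheese Strings_e6_wav_cue"  },
};

// The Theory Engine asks for (E, 6, Cheese Strings) and gets the cue back.
std::string FindCue(const std::string& Note, int Octave, const std::string& Instrument)
{
    auto It = CueMap.find({ Note, Octave, Instrument });
    return It != CueMap.end() ? It->second : std::string();
}
```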

This picture also shows why moving cue mapping to code would be beneficial:

[image: Blueprint graph wiring each note's sound cue by hand, for a single octave]

That's a lot of wires I have to mouse-click, and this is for a single octave. A piano has 88 keys.

Untenable.

Requirements

  • Create a Note → Cue Mapping in C++ that allows us to programmatically choose the appropriate sound cue from Note/Instrument/Octave
  • Create a