Designing The Core Gameplay Loop

Tags: Design, Engineering, C++, Dev Log, Code
Owner: Justin Nearing
🎶
This is part of an ongoing series called Building A Music Engine.

It documents the process of me smashing my head against the keyboard to build a game called LETSGO.

It’s gotten long enough to break into several sections:

This chapter started as a rubber duck for how I wanted to build the core gameplay loop for my game LETSGO.

The intent of the game is to generate some kind of musical composition at runtime.

The player is presented with different choices, which provide the direction of the song:

Instead of random platforms, I want to present the player with 3 platforms, and the one they step on sets the key the rest of the song will be in.

Then another set of audio platforms appear which will set whether the song is in major/minor.

Then show platforms that will select which instrument to use.

And follow this pattern until something resembling music is playing.

In this chapter, I will explain the design I came up with, as well as how I got here.

So, LETSGO!

What I ended up with

After a couple days of thought, here’s the software design I have come up with:

[Image: diagram of the final core gameplay loop design]

The intent is to create a musical composition over the course of the game’s runtime.

This is achieved in Phases.

Phases contain a set of Actions that describe how to resolve the Phase.

Phases implement a PhaseController interface that describes the common commands necessary for controlling the lifetime of the phase.

A Phase Manager owns a set of PhaseControllers and is responsible for issuing the commands for each Phase’s lifetime.

When a Phase is active, it invokes commands to an ActionExecutor, which more-or-less maintains a queue of Actions to cycle through.

So, when a SetTonic Phase is activated, it will send a Create Audio Platform Action to the ActionExecutor, and start listening for a Player Stepped on an Audio Platform event to be triggered by any AudioPlatform.

When it receives that event, it will consume the Note that event fires with, and use it to update the “Tonic” in our State object.

This means that our State, Phase management, and gameplay execution are logically separated from each other.

This is the design I’ll start building out in code in the next chapter.

However, I want to describe how I got to this design.

It didn’t happen overnight- it took a solid week of refining an initial idea into what I presented above.

I want to share where I started from, and how I got to this final design.

Step One: Map out a rough idea

When painting a picture, you rarely want to start with details- jumping straight into contour, line work, etc.

Often, a better approach is to use a wide brush to start blocking in general shapes using broad strokes.

With that in mind, this is the first broad strokes plan for the core gameplay loop:

[Image: first broad-strokes sketch of the core gameplay loop]

I am imagining a Composer that is responsible for managing the musical composition at runtime.

I’m thinking of a separate Conductor that acts as the intermediary between player actions and composer.

You’ll notice that in the final design, I don’t have a Composer or Conductor class- but it’s this orchestral conceptualization that frames how I’m thinking about the design.

The Conductor doesn’t make up the music, it merely tells the instrumentalists what to be doing, and when.

Similarly, the Composer doesn’t actually play the music it’s creating; it only defines which notes each instrument should play.

In the context of my game, these two entities interact with each other through Phases.

A Phase describes the actions the Conductor needs to take, and the data the Composer needs to move to the next Phase.

The Phase “SetTonic” gives the Conductor the action “create audio platform”, and when the platform is stepped on, tells the Composer we’re in the key “D flat”.

What I want to achieve from this design is that the logic concerned with creating the musical composition is separate from the actions the player takes.

A more naïve approach would have the state, the musical composition, and the gameplay actions all tightly coupled.

I am working very hard in this design to keep these as separate logical domains that are as decoupled as possible.

Event Driven Gameplay

So I have this “Composer” domain of logic, which I want separate from the “Conductor” domain.

But they still have to interact with each other.

One pattern I like to use for separating logical domains is event invocation.

I’ve used Event Systems in Unity with C#, and I feel it’s appropriate for this domain separation.

Unreal supports an event system through the use of “Delegates”:

Basically we define that some class will fire an event, and set up other classes to “listen” for that event to fire.

The nice thing about how this works is the thing listening for the event doesn’t have to know anything about the thing triggering the event.
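
As a rough sketch of the mechanics in Unreal C++ (the names here, like FOnPlayerSetNote, FLetsGoNote, and USetTonicPhase, are placeholders I’m making up, not final code):

// In the broadcasting class (e.g. an AudioPlatform), declare the delegate type
DECLARE_DYNAMIC_MULTICAST_DELEGATE_OneParam(FOnPlayerSetNote, FLetsGoNote, Note);

// Expose an instance of it so other things can bind to it
UPROPERTY(BlueprintAssignable)
FOnPlayerSetNote OnPlayerSetNote;

// The platform fires the event without knowing who (if anyone) is listening
OnPlayerSetNote.Broadcast(SteppedNote);

// A listener binds a handler when it wants to start caring about the event
AudioPlatform->OnPlayerSetNote.AddDynamic(this, &USetTonicPhase::HandlePlayerSetNote);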

⁉️

This lack of knowledge can also be a downside.

The listening object doesn’t have access to the state of the object sending the event.

This can get you into trouble if you need some data and haven’t designed your system correctly.

This idea of having an event system was enough to refine the initial plan into this:

[Image: refined plan with the Composer and Conductor domains communicating through events]

This is OK.

I have the domain separation which I’ve deemed very important, and have some kind of idea of those domains throwing events around at each other.

The problem here, though, is this concept of a Phase Resolution Object which contains a property Note.

The method signature for UpdateComposer would be something like:

void UpdateComposer(PhaseResolution input) {};

This assumes a class called PhaseResolution exists, which may or may not contain the property Note. I’m going to have a hell of a time figuring out what to do with the many kinds of PhaseResolutions available, the properties they may contain, and the logic that needs to be triggered on resolution.
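
To illustrate why (purely hypothetical code, not something I’m building): every new kind of Phase forces UpdateComposer to learn about another payload and grow another branch:

void UpdateComposer(PhaseResolution input) {
	// Hypothetical: UpdateComposer ends up knowing about every Phase's data
	if (input.PhaseName == "SetTonic")        { State.SetTonic(input.Note); }
	else if (input.PhaseName == "SetMode")    { State.SetScale(input.Scale); }
	else if (input.PhaseName == "BPMSwitch")  { Clock.SetBPM(input.BPM); }
	// ...and so on, for every Phase I ever add
};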

Not great.

Describe Your Data First

At this point, I decided to take a look at this concept of a Phase more carefully.

It’s kind of the glue connecting our logical domains together.

So here I map out an example of what a PhaseManager type entity would be concerned with:

Name       | State            | Eligible | Repeatable? | Eligible Phase
Set Tonic  | Complete         | False    | False       | Intro
Set Third  | Currently Active | True     | False       | Intro
Set Mode   | Pending          | False    | False       | Intro
Bass Drop  | Pending          | False    | True        | Bridge
BPM Switch | Pending          | True     | True        | Bridge, Chorus, Outro
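
In code, a row of that table might map to something like this (a sketch with my own naming, using Unreal-style types):

enum class EPhaseState { Pending, CurrentlyActive, Complete };

struct FPhaseStatus {
	FName Name;                    // e.g. "Set Tonic"
	EPhaseState State = EPhaseState::Pending;
	bool bEligible = false;
	bool bRepeatable = false;
	TArray<FName> EligiblePhases;  // song sections this Phase can run in, e.g. { "Intro" } or { "Bridge", "Chorus", "Outro" }
};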

What this tells me is that there needs to be an object representing Set Tonic.

During the game, the PhaseManager will activate the Set Tonic phase when eligible.

The intent of this Phase is to present the player with a choice of Notes:

[Image: audio platforms presenting the player with a choice of notes]

When the player steps on the F# platform, it needs to tell the Composer that the Tonic for this composition is F#.

However, something like a BPM Switch phase does not update the state in this way. It has no knowledge of “notes”. It changes a completely different variable owned by a completely different entity.

So let’s pseudo some code:

class SetTonic { 
	ComposerState State; 
	
	void Activate() {
		StartListeningForEvent(PlayerSetNote(Note note));
	};
	
	void OnPlayerSetNote(Note note) {
		State.SetTonic(note);
	};
	
	void Deactivate() {
		StopListeningForEvent(PlayerSetNote(Note note));
	};
};

The idea here is when the F# platform is stepped on, it fires a PlayerSetNote(F#) event.

And our SetTonic phase is listening for this event to be fired.

But only if the PhaseManager tells it to do so- on activate, it starts listening for that event, and will only set the State’s tonic property when active.

The idea being the BPM Switch phase has its own class defining what it needs to do when the phase is active.

Using interfaces to manage common commands

Here’s the thing: I don’t want our PhaseManager to contain a SetTonic object or a BPMSwitch object.

It doesn’t care what each Phase does, it only cares about managing each Phase’s lifetime, which is only a small part of what each Phase is.

For these kinds of cases, I like the idea of using interfaces to explicitly define how the PhaseManager expects to manage each Phase.

An interface is basically an abstract class that defines a few empty methods.

class PhaseController {
	void Initialize() {};
	void Activate()   {};
	void Deactivate() {};
};

Our SetTonic class then implements the interface:

class SetTonic : PhaseController { 
	ComposerState State; 
	
	void Initialize() {
		GetComposerState();
	}
	
	void Activate() {
		StartListeningForEvent(PlayerSetNote(Note note));
	};
	
	void Deactivate() {
		StopListeningForEvent(PlayerSetNote(Note note));
	};

};

Because we define SetTonic as a PhaseController, it must have the methods we defined in the PhaseController class.

The useful part of this pattern is when we define our PhaseManager:

class PhaseManager { 
	TArray<PhaseController> Phases;
	
	// As a very rough implementation
	void ActivateNextPhase(){
		Phases[0].Deactivate();
		Phases[1].Activate(); 
	};
};

Here our PhaseManager only knows SetTonic in terms of the methods defined in PhaseController.

It doesn’t know about the method “GetComposerState”, it only knows the activation/initialization methods.
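
In actual Unreal C++, the pseudo-interface above would most likely become a UInterface. Something like this sketch (not final code):

// PhaseController.h
#include "UObject/Interface.h"
#include "PhaseController.generated.h"

UINTERFACE(MinimalAPI)
class UPhaseController : public UInterface
{
	GENERATED_BODY()
};

class IPhaseController
{
	GENERATED_BODY()

public:
	virtual void Initialize() = 0;
	virtual void Activate() = 0;
	virtual void Deactivate() = 0;
};

// The PhaseManager can then hold phases purely through the interface:
// TArray<TScriptInterface<IPhaseController>> Phases;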

Executing Phases through Actions

So, we have a way of managing the lifetime of a Phase.

Now we need to connect it to the “Conductor”.

The idea I had for the Conductor is that it only cares about what gameplay actions are necessary.

It should not have any knowledge about the Composer’s state. It doesn’t care if the Tonic has been set or not. It has no knowledge of a Tonic.

Even in the case where the SetTonic phase is currently active, I don’t want it to know anything about the “SetTonic” object itself.

It is only concerned with executing a set of Actions.

An Action itself is a functional gameplay operation.

It’s “Spawn 3 AudioPlatforms in front of the player”

“Reduce volume on synth to 0”

We’re literally describing what to do- not how to do it, or why we’re doing it.

The best place to define what actions a SetTonic phase needs to do is in our SetTonic object:

class SetTonic : PhaseController { 
	ComposerState State; 
	
	// The list of gameplay actions to send to Conductor
	array<Action> Actions = {
		SpawnAudioPlatform, 
		TriggerHarpGliss,
	}
	
	void Initialize() {
		GetComposerState();
	}
	
	void Activate() {
		StartListeningForEvent(PlayerSetNote(Note note));
		
		// When the phase is activated, send actions to Conductor to be executed
		SendEvent_AddToConductorQueue(Actions);
	};
	
	void Deactivate() {
		StopListeningForEvent(PlayerSetNote(Note note));
	};

};

Here we’ve updated our SetTonic phase with a set of Actions.

The idea is to trigger the audio platforms, and when a platform is stepped on we trigger a harp gliss (an ascending run of notes performed on a harp).

I think this works- the harp gliss Action waits for the same PlayerSetNote() event that the SetTonic phase object does.

When it receives this event, it can use that F# to determine which audio cue should be fired. (I’m imagining a bunch of audio clips containing the harp gliss sound, one for each note. When it gets the PlayerSetNote “F#”, it fires the FSharp_HarpGliss.wav sound.)
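
One way that could be wired up (a sketch; the cue map and note.GetName() are my own placeholders for however the Note type exposes its name):

// Hypothetical: one imported sound asset per note, e.g. "FSharp" -> FSharp_HarpGliss.wav
TMap<FName, USoundBase*> HarpGlissCues;

void OnPlayerSetNote(Note note) {
	// Look up the cue for the note the player stepped on and fire it
	if (USoundBase** Cue = HarpGlissCues.Find(note.GetName())) {
		UGameplayStatics::PlaySound2D(GetWorld(), *Cue);
	}
};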

I may have to put some thought into how exactly the Conductor is managing the Actions to be run.

There may be multiple Actions happening at the same time, so naively processing an array might not work.

But we can reuse the patterns defined in our Phase management for Action management:

interface Action {
	void Initialize();
	void Activate();
};

class TriggerHarpGliss : Action {
	Scale Scale;
	Note Tonic;
	KVPair AudioCues;
	
	ComposerState State;
	
	void Initialize() {
		Scale = State.Scale;
		Tonic = State.Tonic;
	};
	
	void Activate() {
		// KVPair value of key {Scale, Tonic} to trigger audio cue
		// This would need UnrealQuartz access 
		TriggerAudioCue(Scale, Tonic);
		
		SendEvent_RemoveFromConductorQueue(this);
	};
};

class Conductor {
	Queue<Action> Actions; 
	
	void ReceiveEvent_AddToConductorQueue(TArray<Action> actionList) {};
	void ReceiveEvent_RemoveFromConductorQueue(Action action) {};
	
	// As a very rough implementation
	void ProcessQueue(){
		Actions[0].Initialize();
		Actions[0].Activate();
	};
};

How it’s processing that queue is still open for interpretation, but this gives us a reasonable-looking initial design for our final product.
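
If a plain queue turns out to be enough, one option (just a sketch, assuming Action becomes a UInterface like PhaseController above) is Unreal’s TQueue, drained whenever the Conductor decides to process:

TQueue<TScriptInterface<IAction>> PendingActions;

void ProcessQueue() {
	TScriptInterface<IAction> Next;
	// Drain everything currently queued, activating each Action in order
	while (PendingActions.Dequeue(Next)) {
		Next->Initialize();
		Next->Activate();
	}
};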

Now that’s all pseudo-code, but I think this is a reasonable implementation for the core loop of my game.

In theory it should allow me to write a wide range of Actions and Phases, with specific implementations of each that vary widely from each other.

I do have to be careful with cases where I’m firing/consuming events.

Order of operations depending on when an event is fired can become somewhat complicated.

But we’ll cross that bridge when we get there.

In the next chapter, I’ll start actually writing some damn code, implementing the above design into C++.

I’m sure that things will fall apart, as I’m sure there are assumptions I’ve made in this design that will not reflect reality.

But, I will have a plan going in, and that is often half the battle.

In the next chapter, I start actually trying to implement this design in code. I expect to get rekt by the experience:

Rubber Duck