
I'm not giving any more time estimates but v0.26 is pretty close to being done now. Mostly it is just a case of clearing up small bugs. I can hardly believe it has been an entire month since my last update, so I wanted to let everyone know that lots of progress is being made.

The large engine refactor I wrote about in my previous post is all done and seems to be working very well. 

I also had another run at getting the macOS builds notarized and that seems to be working now, so macOS users should finally be able to open Blockhead without their OS yelling at them.

I'm now working on another (much smaller) refactor to the way that object data is managed during project loading and undo/redo operations. I hope to wrap this up in a couple of days, then fix the remaining smaller bugs that I know about, and then I should be done, assuming nothing else major pops up.

The remainder of this post is awful technical stuff

Yesterday while testing things I found an awkward problem that arises during loading of a project if it contains any nested macro blocks (macro blocks that contain more macro blocks).

The process of loading project files started off pretty simple. Blockhead would read through the project file, collect all the data for the various objects (blocks, block instances, lanes, tracks, workspaces, etc.) and pretty much just load them one by one.

This worked fine up until I added manipulators. The "Destination" button in the top-right of a manipulator lets you target a specific other block on the workspace, so suddenly I had a type of block which could hold a reference to another block in the project (and there are many more places where this happens now, outside of just manipulators).

So loading things one by one in whatever order they occur in the project file was no longer good enough, because a Manipulator block might try to initialize itself before the target block has been loaded yet.

So instead of loading things in an arbitrary order, I created a loader which first scans the project file, collects all the objects that need to be loaded, and figures out which objects must be loaded before others (e.g. manipulator blocks are loaded last).
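The dependency-aware loading described above can be sketched as a topological sort over a dependency graph. This is an illustrative sketch, not Blockhead's actual code; the `SavedObject` struct and `load_order` function are made-up names, and cycle detection is omitted for brevity:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Each saved object lists the IDs it depends on, and the loader orders
// objects so dependencies are initialized first (e.g. a manipulator's
// target block loads before the manipulator itself).
struct SavedObject {
    std::string id;
    std::vector<std::string> depends_on;  // IDs that must load first
};

// Depth-first topological sort over the dependency graph.
inline std::vector<std::string> load_order(const std::vector<SavedObject>& objects) {
    std::unordered_map<std::string, const SavedObject*> by_id;
    for (const auto& o : objects) by_id[o.id] = &o;

    std::vector<std::string> order;
    std::unordered_set<std::string> visited;

    std::function<void(const std::string&)> visit = [&](const std::string& id) {
        if (!visited.insert(id).second) return;  // already handled
        if (auto it = by_id.find(id); it != by_id.end())
            for (const auto& dep : it->second->depends_on) visit(dep);
        order.push_back(id);  // all of this object's dependencies are in by now
    };
    for (const auto& o : objects) visit(o.id);
    return order;
}
```

With this, a manipulator that targets a sampler block always comes after the sampler in the load order, regardless of where each appears in the project file.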

This is a bit hacky but it has worked fine for a while. The new macro system complicates things a lot more though. Macros have their own workspaces which contain references to other blocks in the project (the blocks inside them). Those inner blocks could themselves also be macros. The inner blocks could also have cloned instances elsewhere in the project. Send/Receive blocks also add more complications.

Changes to the way blocks are saved and loaded also affect the undo/redo system, since a lot of that code is shared between the two systems.

When I first added undo/redo support I just did things in the way that made the most sense at the time. Godot has an undo/redo system where you serialize commands to run for each undo/redo'able action. When a block is deleted, all the data required to restore it in the event of an "undo" command needs to be passed to Godot's undo/redo system.

Objects need to be stored in a serializable format so that they can be passed to Godot (i.e. a bunch of dictionaries and arrays). This is the same format I used to write objects to the project file when the project is saved.

Creating an appropriate series of undo commands can be really messy. I realize now that I have made things much more difficult for myself than was necessary, hence the refactor I am working on at the moment which should massively simplify things.

The nightmare undo/redo case in Blockhead is when the last instance of a macro block is deleted. The inner blocks inside a macro can be clones of blocks which also have other instances elsewhere (e.g. there could be two instances of a sampler block, one inside the macro and one outside it).

When the macro instance is deleted I have to look at all the inner blocks and check whether they have any other references outside the macro. If not, those blocks will be deleted from the project, so undo data needs to be generated to make them restorable. The undo data for the removed inner blocks needs to be stored alongside the undo data for the macro block itself. Then, when the macro deletion is undone, the macro needs to re-create all those blocks during initialization by loading them from the undo data.

What I am doing now, which should make all this much less stupid, is creating a central store for restorable object data. This is just the data that is required to load or restore a given object, and nothing else. Any time an object needs to be saved or loaded, it is written to the data store and can be looked up later via its unique ID. Once object data is written to the store, it just stays there until it is either overwritten by a new version of the data, or the project is closed.
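The central store described above can be sketched as a simple ID-keyed map. This is an illustrative sketch under my own assumptions, not the actual implementation; the real store presumably holds per-type structs, and a string payload stands in here:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <unordered_map>

// A central store for restorable object data: just the data required to
// load or restore an object, keyed by its unique ID. Entries stay in the
// store until overwritten by a newer version or the project is closed
// (i.e. the map simply lives for the project's lifetime).
struct ObjectStore {
    // Write (or overwrite) the restore data for an object.
    void put(const std::string& id, std::string data) {
        entries_[id] = std::move(data);
    }

    // Look up restore data by unique ID; empty if nothing was ever saved.
    std::optional<std::string> get(const std::string& id) const {
        auto it = entries_.find(id);
        if (it == entries_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::unordered_map<std::string, std::string> entries_;
};
```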

Now, when a project is loaded, before any objects are loaded properly, I first go through the project file and just write their data to the central store. Then I can start loading objects in pretty much any order. If an object is being loaded and requires a reference to another object in the project, it first checks to see if the other object is already loaded. If not, it gets the data from the store and loads it on the fly.
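The "check if already loaded, otherwise load from the store on the fly" step can be sketched like this. Again a hypothetical sketch with made-up names (`LiveObject`, `resolve`), not Blockhead's real types:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Stand-in for a fully loaded, live project object.
struct LiveObject {
    std::string id;
    std::string data;
};

struct Loader {
    std::unordered_map<std::string, std::string> store;  // id -> saved data
    std::unordered_map<std::string, std::shared_ptr<LiveObject>> live;  // id -> loaded object

    // Resolve a reference during loading: return the already-loaded object
    // if there is one, otherwise construct it on the fly from the store.
    std::shared_ptr<LiveObject> resolve(const std::string& id) {
        if (auto it = live.find(id); it != live.end()) return it->second;
        auto sit = store.find(id);
        if (sit == store.end()) return nullptr;  // unknown ID
        auto obj = std::make_shared<LiveObject>(LiveObject{id, sit->second});
        live[id] = obj;  // loaded exactly once; later lookups reuse it
        return obj;
    }
};
```

Because every object's data is written to the store up front, `resolve` works no matter what order objects are visited in, which is what removes the need for the careful dependency ordering.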

This same system is now being used to make undo/redo stuff much simpler. Whenever an object is deleted, it first writes its data to the store. Then it creates an undo command for Godot which is basically just "restore the object with this unique ID".
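Deletion with a "restore the object with this ID" undo command can be sketched as follows. Godot's actual `UndoRedo` API is not reproduced here; a plain callback stack stands in for it, and all names are hypothetical:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

struct Project {
    std::unordered_map<std::string, std::string> live;   // loaded objects
    std::unordered_map<std::string, std::string> store;  // restore data
    std::vector<std::function<void()>> undo_stack;       // stand-in for Godot's UndoRedo

    void delete_object(const std::string& id) {
        store[id] = live[id];  // write restore data to the store first
        live.erase(id);
        // The undo command only carries the ID; the heavy data lives in the store.
        undo_stack.push_back([this, id] { live[id] = store[id]; });
    }

    void undo() {
        undo_stack.back()();
        undo_stack.pop_back();
    }
};
```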

When the undo operation happens, I can now just use the ID to look up the data from the store. Memory-wise I think this is also much better than pushing all that data into Godot's undo/redo system as I was doing before.

The nightmare Macro deletion case is now much simpler. When the macro is deleted I can just go through all its inner objects and save their data to the store, just in case they need to be restored during the undo operation. If they don't then it's no big deal, since the data is pretty small (sample data is handled by a separate system.)

Then, when the macro deletion is undone and the macro is restored and reinitialized, it can just grab any existing references to its inner objects, or reload them on the fly from the data store, just like what happens when the project is loaded.
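The macro case above can be sketched as: save every inner object's data unconditionally, delete only the ones with no remaining references, and reload whatever is missing on undo. This is my own illustrative sketch with invented names, not the shipped logic:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

struct MacroProject {
    std::unordered_map<std::string, std::string> live;   // id -> object data
    std::unordered_map<std::string, int> outside_refs;   // references outside the macro
    std::unordered_map<std::string, std::string> store;  // restore data

    void delete_macro(const std::vector<std::string>& inner_ids) {
        for (const auto& id : inner_ids) {
            store[id] = live[id];  // data is small, so save it just in case
            if (outside_refs[id] == 0) live.erase(id);  // no other users: remove
        }
    }

    void undo_delete_macro(const std::vector<std::string>& inner_ids) {
        for (const auto& id : inner_ids)
            if (!live.count(id)) live[id] = store[id];  // reload on the fly
    }
};
```

Note that the shared inner block (one with instances outside the macro) is never deleted, so its store entry simply goes unused on undo, which is the "no big deal" case from above.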

One of the nice things about this new system is I am now just dealing with plain C++ structs for the object data instead of having to manipulate dictionaries everywhere. I still need to eventually encode everything to dictionaries when the project is saved/loaded to/from a project file, but that step is now separated out from everything else.
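The separation described above might look something like this: in-memory code deals only with a plain struct, and conversion to a dictionary-like form happens once, at the save boundary. Field names are made up, and a `std::map` stands in for Godot's `Dictionary` type:

```cpp
#include <cassert>
#include <map>
#include <string>

// Plain C++ struct used everywhere in-memory: loading, undo/redo, the
// data store. No dictionary manipulation involved.
struct BlockData {
    std::string id;
    double gain = 1.0;
};

// Serialization boundary: struct -> dictionary only when the project is
// actually written to a project file.
inline std::map<std::string, std::string> encode(const BlockData& b) {
    return {{"id", b.id}, {"gain", std::to_string(b.gain)}};
}
```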
