The entire point of virtualization is that you separate out the actual process from the hardware with a virtualization layer in between.
What you're paying for is uptime, not hardware.
Somewhere in the world, SpatialOS will have a data center that they either own or rent. In that data center will be a number of servers that are set up as the bare metal.
On top of that bare metal there will be a virtualization layer.
That virtualization layer will be able to bring worker processes up and down, and assign them to run on different servers, all on the fly.
So the worker processes running Fractured in the virtualization layer could be load-balanced to use 20% of the resources across 10 different servers all at once. Someone else's SpatialOS software could be load-balanced across that same set of servers.
Then if there's a big release and the number of users spikes and more capacity is needed, the virtualization layer can automatically rebalance that out to use 30% of the resources across 15 different servers all at once.
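To make that concrete, here's a minimal sketch of the kind of placement decision the virtualization layer is making. Everything in it (Server, Cluster, Assign, the least-loaded rule) is a hypothetical illustration of the general technique, not anything SpatialOS-specific:

```go
package main

import "fmt"

// Server models one physical machine's load as seen by the
// virtualization layer. All names here are made up for illustration.
type Server struct {
	ID   string
	Load float64 // fraction of capacity in use, 0.0 to 1.0
}

// Cluster is the pool of bare-metal machines the layer schedules onto.
type Cluster struct {
	Servers []*Server
}

// Assign places a worker process on the least-loaded server,
// the simplest form of the load balancing described above.
func (c *Cluster) Assign(workerCost float64) *Server {
	var best *Server
	for _, s := range c.Servers {
		if best == nil || s.Load < best.Load {
			best = s
		}
	}
	best.Load += workerCost
	return best
}

func main() {
	c := &Cluster{Servers: []*Server{
		{ID: "rack1-a"}, {ID: "rack1-b"}, {ID: "rack2-a"},
	}}
	// Ten workers at 2% each spread themselves across the pool.
	for i := 0; i < 10; i++ {
		s := c.Assign(0.02)
		fmt.Printf("worker %d -> %s (load now %.0f%%)\n", i, s.ID, s.Load*100)
	}
}
```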
If one of those servers falls over for some reason, then the worker processes running on that server will be impacted. But the others will chug along just fine, and the processes that crashed can just be restarted on one of the other servers - hopefully fast enough that the overall impact on the userbase is minimal.
Then if one of those servers needs to be taken down for maintenance, the virtualization framework is notified of the change, gracefully migrates any worker processes over to another server, and once that server isn't running anything it can be shut down. Maintenance can be performed - more RAM, upgrade the CPU, change out the RAID drives, whatever. Then the server is brought back online, reintegrates into the virtualization layer, and worker processes can start to be assigned back to it.
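Again as a sketch: the drain-for-maintenance flow is basically "move every worker somewhere healthy, then mark the box safe to power off." The crash case is the same reassignment done after the fact instead of before. Names and the pickHealthy rule are hypothetical:

```go
package main

import "fmt"

// Worker and Server are stand-ins for whatever the
// virtualization layer actually tracks internally.
type Worker struct{ ID string }

type Server struct {
	ID      string
	Workers []*Worker
	Up      bool
}

// Drain migrates every worker off a server ahead of maintenance.
// After a crash, the same reassignment happens reactively instead.
func Drain(from *Server, pool []*Server) {
	for _, w := range from.Workers {
		dest := pickHealthy(pool, from)
		dest.Workers = append(dest.Workers, w)
		fmt.Printf("migrated %s: %s -> %s\n", w.ID, from.ID, dest.ID)
	}
	from.Workers = nil
	from.Up = false // now safe to power off and swap hardware
}

// pickHealthy returns any up server other than the one being drained;
// a real scheduler would pick the least-loaded one instead.
func pickHealthy(pool []*Server, avoid *Server) *Server {
	for _, s := range pool {
		if s != avoid && s.Up {
			return s
		}
	}
	return nil
}

func main() {
	a := &Server{ID: "a", Up: true, Workers: []*Worker{{"w1"}, {"w2"}}}
	b := &Server{ID: "b", Up: true}
	Drain(a, []*Server{a, b})
}
```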
The entire point of this kind of virtualization is that the client is paying for uptime and performance, not hardware. The data center and virtualization provider deal with the hardware side of things so that the client doesn't have to.
To be clear, I'm not familiar with exactly how SpatialOS does things. This is just the general outline of how cloud virtualization works at the datacenter level in contexts I'm familiar with, and I'm making some reasonable assumptions about how SpatialOS probably works as a natural evolution on top of that.
The thing about SpatialOS that's really interesting is that they're virtualizing at the level of the worker process. Originally, virtualization took place at the level of an entire virtual server. Then Docker came along, and virtualization started happening at the level of a "container" that runs in an isolated slice of a given server, shrinking the total footprint required to virtualize a piece of functionality.
The industry has been moving towards the virtualization of worker processes for a while now. The implementations I'm familiar with on the business computing side of things are called microservices, which work in the manner I described above. I'm assuming that SpatialOS worker processes are doing something similar, because that would make sense.
The thing about SpatialOS that's more interesting than microservices is that they've set up a shared set of data (the game world) that multiple worker processes can all operate on. There's a distinction between worker processes that run as the back-end for a player connection and worker processes that run the background world, and there's some synchronization going on to make sure that two different worker processes don't screw things up by trying to make different changes to the same world object at the same time.
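One common way to get that guarantee is optimistic concurrency with version numbers: a write only lands if the object hasn't changed since the worker read it. I don't know that SpatialOS does it this way; this is just one standard technique, sketched with hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
)

// WorldObject is a hypothetical shared entity in the game world.
type WorldObject struct {
	mu      sync.Mutex
	version int
	HP      int
}

// TryUpdate applies a change only if the object is still at the
// version the worker originally read. If it isn't, the worker gets
// a conflict and must re-read and retry: the same "don't clobber
// each other" synchronization described above.
func (o *WorldObject) TryUpdate(readVersion, newHP int) bool {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.version != readVersion {
		return false // another worker changed the object first
	}
	o.HP = newHP
	o.version++
	return true
}

func main() {
	obj := &WorldObject{HP: 100}
	v := obj.version                  // both workers read version 0
	fmt.Println(obj.TryUpdate(v, 90)) // worker A wins: true
	fmt.Println(obj.TryUpdate(v, 80)) // worker B conflicts: false
}
```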
Which is basically how multi-threading works as a general programming concept. So from the outside, it looks like SpatialOS is virtualizing at the worker-process level in a way that is analogous to traditional multi-threading... Which means they've abstracted out most of what a computer does, so you can treat the back-end as just a bunch of threads running out in the cloud, with automatic load-balancing, scaling up and down as needed to meet the load, and robustness to individual server failure at the hardware level.
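Here's the same idea in plain multi-threading terms, which is why the analogy holds: goroutines standing in for worker processes, a mutex standing in for whatever coordination the platform actually does around shared world state:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex
	hp := 100 // shared world state

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			hp-- // only one "worker" touches the object at a time
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(hp) // always 90, never a lost update
}
```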
So my assumption could be wrong: perhaps SpatialOS does do dedicated servers on a per-customer basis. That wouldn't be maximally efficient, but it wouldn't be crazy either.
But even here, you'd probably want to have multiple servers for the redundancy of it... Because that's kind of the entire point of why you want virtualization in the first place. And in principle, Dynamite shouldn't have to know or care about what's going on at the hardware level, although in practice it's probably a good idea if at least one person at Dynamite does know what's going on.
Anyway, impromptu and probably-badly-structured-and-confusing lecture over. Sorry for the huge wall of text. 