By Exact-Yesterday-992
Scaling websockets is easy. They're effectively long-lived HTTP requests (under the hood), so you can throw a load balancer at the problem and route incoming WS requests to however many backend servers you want. Badda bing, badda boom, problem solved.
So that's not your problem. Your problem is almost certainly going to be in how you scale access to the backend game state. I.e., if Player A is connected to server X and Player B is connected to server Y, how do servers X and Y agree on whether Player A shot Player B?
The reason is that scaling network connections tends to be an `O(N)` problem, while adding players tends to be an `O(N x (N-1) / 2)` problem. Network load increases linearly while CPU load increases quadratically. Ergo: CPU melts before network falls over.
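To put rough numbers on that growth (a quick back-of-the-envelope sketch, nothing project-specific):

```javascript
// Each of N players can interact with the other N - 1 players, and each
// pair is counted once, so potential interactions = N * (N - 1) / 2.
function pairs(n) {
  return (n * (n - 1)) / 2;
}

for (const n of [10, 100, 1000, 10000]) {
  console.log(`${n} players -> ${n} connections, ${pairs(n)} potential interactions`);
}
// 10,000 players means 10,000 connections but ~50 million potential pairs.
```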
This is compounded by the fact Node is single-threaded(ish), so only uses one CPU core(ish) out of the box. At some point you're going to notice that 63 of the 64 CPU cores on that nice big EC2 instance you bought are going unused.
To solve that, you're going to turn your attention to `worker_threads` or `cluster` (or whatever multi-process solution you choose for connecting node processes together.) But all of those require a fundamentally different approach to game state coordination. Instead of direct in-memory access you have to use IPC messaging of some sort. E.g. worker thread messages.
(Huh... TIL: [BroadcastChannel](https://nodejs.org/api/worker_threads.html#class-broadcastchannel-extends-eventtarget) is a thing in worker threads.)
That transition from direct memory access to messaging is where the real pain of scaling lies. And it starts as soon as you need to have more than one process. Once you solve that, scaling to multiple servers is relatively easy. Messaging across servers is just like messaging across processes (kinda), so you can spin up a messaging service of some sort (Redis? MQTT?) and route your messages through that instead.
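A sketch of that cross-server hop, assuming a local Redis instance and the `ioredis` client; the channel name and message shape are invented for illustration:

```javascript
// Pure helpers: tag every message with the sending server's id so an
// instance can ignore its own broadcasts when they echo back.
function encodeEvent(sourceId, event) {
  return JSON.stringify({ sourceId, event });
}

function decodeEvent(raw) {
  return JSON.parse(raw);
}

// Wire one server instance up to the shared event bus. Redis needs
// separate connections for publish and subscribe, hence two clients.
function connectEventBus(serverId, onRemoteEvent) {
  const Redis = require('ioredis'); // npm install ioredis; required lazily
  const pub = new Redis();          // so the helpers above work without it
  const sub = new Redis();

  sub.subscribe('game-events');
  sub.on('message', (_channel, raw) => {
    const { sourceId, event } = decodeEvent(raw);
    if (sourceId !== serverId) onRemoteEvent(event);
  });

  return {
    broadcast: (event) => pub.publish('game-events', encodeEvent(serverId, event)),
  };
}
```

Each instance would call something like `connectEventBus('server-x', applyRemoteEvent)` once at startup, then `broadcast(...)` whenever local state changes that other instances need to see.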
***However*** ... all of the above said, you probably shouldn't be worrying about any of this yet. 99 out of 100 people who say they expect 10,000 active players never even get to 100. So start with one small server and don't worry about scaling yet. Build your game with the most naive server architecture you can get away with, because dealing with IPC messaging is a pain in the ass and a waste of time early on. Instead, focus on building a game that's compelling enough you can convince 10 of your closest friends and family to actually want to play it on a regular basis. That is a *much* harder problem than scaling.
If you can do that... well... then you actually have a scaling problem. Good luck.
Worrying about scaling is incredibly wasteful. I'd argue that 99 out of 100 developers prematurely optimize for performance, wasting time and effort that should've gone into experimenting and figuring out what works.
My lesson learned: celebrate performance issues, most projects never reach that phase. It's an indicator of success.
Yeah my normal approach to optimization (at least when working on my own projects) is to just code the first thing that comes to mind and then insert comments where optimizations could be made, describing what could be done differently
Apologies in advance for being pedantic, just adding some background!
WebSocket's initial handshake is HTTP, but after that it's no longer HTTP. It can't be valid HTTP, because both sides keep sending messages in a way that isn't HTTP-compatible. The server initially responds with `101 Switching Protocols`, effectively indicating that the existing TCP socket is now used for something else.
This is also the reason why websockets don't run on existing HTTP/2 servers yet. If they were fully compatible with regular HTTP, they could be made to work on HTTP/2 and HTTP/3 servers. New websocket protocols on top of HTTP/2 and HTTP/3 are underway, though.
By contrast, 'Server-Sent events' _is_ HTTP, and does work out of the box on top of HTTP/2 and 3.
No worries. I have a soft spot in my heart for pedants. :-)
You're absolutely correct about all of the above. Websockets over HTTP/2 presents some issues (a fact I wasn't aware of, so thank you for that!) and is certainly worth reading up on. And, yes, OP might want to consider SSE for that reason.
This isn't really a scalability issue, though. You're going to be making the same decisions about your transport whether you're supporting 3 users on a nano instance or 10,000 users across a multi-instance service, right?
Yeah no comment on anything else and I don't have an opinion about scale or picking SSE over websocket. I just like talking about protocols :P
Speaking of SSE, did you know that you can pass a `ReadableStream` as the body of a `fetch()` call? Combine that with SSE and you have a pseudo-WS using the HTTP protocol!
It creates two independent one-way streams of information instead of a single full-duplex stream, but at least it's 100% HTTP.
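A sketch of that wiring, with hypothetical `/down` and `/up` endpoints (streaming request bodies need `duplex: 'half'` and are only supported in some runtimes, e.g. Node's fetch and Chromium over HTTP/2):

```javascript
// Pure helper: parse one SSE "data:" line into a message. Simplified —
// real SSE events can span multiple data lines and carry ids/retries.
function parseSseData(line) {
  return line.startsWith('data:') ? line.slice(5).trim() : null;
}

// Pseudo-WebSocket: SSE downstream, a streaming fetch body upstream.
// Assumes a runtime with EventSource, ReadableStream, and fetch globals.
function openDuplex(url, onMessage) {
  // Downstream: EventSource handles the SSE framing for us.
  const events = new EventSource(`${url}/down`);
  events.onmessage = (e) => onMessage(e.data);

  // Upstream: hand fetch a stream we can push messages into later.
  let push;
  const body = new ReadableStream({
    start(controller) {
      push = (msg) => controller.enqueue(new TextEncoder().encode(msg + '\n'));
    },
  });
  fetch(`${url}/up`, { method: 'POST', body, duplex: 'half' });

  return { send: (msg) => push(msg) };
}
```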
Do it with one server. If you run into limitations you can split it. YAGNI.
Hey, so I actually had someone reach out to me over [https://github.com/kartikk221/hyper-express](https://github.com/kartikk221/hyper-express) regarding a situation very similar to yours: deciding how to properly write realtime communications infrastructure for their multiplayer game. Here are the things they discovered throughout the development process:
1. You should try to maximize the throughput of your webserver to get the most performance out of each instance, which is why they used HyperExpress.
2. You should develop your infrastructure in a cluster model (i.e. multiple servers), as that is the future-proof way to scale.
3. The way you segment your logic across the multiple servers depends entirely on the type of multiplayer mode your game implements and how much data it is transmitting back and forth.
4. So for example, if you have a battle royale type of game with 100 players and each player is transmitting data in quick succession, then your server will also likely be doing some CPU processing on the incoming actions to keep track of the true game state on the backend. In this case, it may make sense to run one or more of these battle royale matches on a single small server instance, depending on how many resources each match uses.
All in all, if you already have a decent user base (which, with 10k active players, I'd say you do), you should look into making your backend easily scalable across multiple small servers, because you will eventually hit a limit with bigger servers. You also get a lot more redundancy with multiple small servers than with one bigger one, so it will result in a better user experience overall.
AWS offers something that might be of interest https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html
You should use multiple servers, and you can manage socket connections running on multiple instances via
Please don't use it. socket.io-redis doesn't scale at all and can greatly reduce the performance of your entire cluster. My game managed to scale to 20K simultaneous connections. You can see in this thread what happened when we replaced socket.io-redis by a custom rabbitmq adapter. [https://twitter.com/Vardiak/status/1254836153922052096](https://twitter.com/Vardiak/status/1254836153922052096)
>so i realize web socket is hard to scale
Why do you say that?
If you have multiple server instances, and maybe an API gateway or load balancer, there's some complexity to deal with.
That would be true of HTTP as well.
Not that ws doesn't use http tbh
It uses HTTP for the handshake, but then switches to a different protocol.
WebSockets work with in-memory state; if you have multiple instances, you need to share that memory between them, or they won't find things cached in each other's. Redis is an in-memory data store: connect each instance to it and they share the same data and can find each other's state, while Redis itself can be scaled standalone. Simple magic lol
I love how everyone's sitting here talking about server load and nobody even asked what the game is.
This is a radically different question for something like chess or poker, which sends an expected 50-byte datagram every 45 seconds with no practical lag sensitivity, versus a first-person shooter, which is constantly streaming high-rate, high-volume, time-sensitive data.
Anyone answering you without asking what you're building is bullshitting and shouldn't be listened to.
Is a bigger server better than multiple small servers? It really, *really* depends.
Consider an MMO. Sharding is basically necessary, but now all the shards have to talk to each other to keep the world state up to date. This is an enormous amount of work; probably more than the rest of the game.
Consider a mass game where everyone interacts with no concept of distance. Sharding is now basically a poison pill, because you've added an O(n²) traffic requirement to your servers and a minimum of two hops to any message, with no practical improvement.
Consider a party game like JackBox, where groups are maybe a dozen people tops. Sharding is an absolute no-brainer, and will make the system substantially easier to work on, and much more durable against failure modes.
Consider a game like Magic: the Gathering Online. The backends should be sharded, but the matchmaker cannot be.
You haven't given any practical information on what you're writing. No answer you've been given is trustworthy.
>Anyone answering you without asking what you're building is bullshitting and shouldn't be listened to.
> You haven't given any practical information on what you're writing. No answer you've been given is trustworthy.
Plenty of comments here are "trustworthy" based on the input given. OP doesn't need to worry about scaling right now because they have nothing yet that needs to scale. Make the game fun first, then worry about scaling, is the most practical advice people are giving here.
Programmers sure do love to put the cart before the horse though.
If you optimize your Node.js server, one instance will be enough. https://blog.jayway.com/2015/04/13/600k-concurrent-websocket-connections-on-aws-using-node-js/
Golang is your solution, come to the dark side my child