Technical solution to eliminate desync in single-player sessions
"I actually was thinking TCP for player-to-server communication. The practice of jumbo frames by ISPs is kind of dreadful, but reliability in getting client commands rather than having them get lost is very important. Keep in mind that the client isn't streaming anything to the server, because the server doesn't trust the client; the only thing the server cares about is where you're clicking, not where you say you are. For example, RTSP (the hybrid TCP/UDP protocol used to stream video on sites like YouTube) relegates user commands (such as "play" and "pause") to TCP, despite using UDP for content delivery. I'm advocating UDP solely for transfer of ephemeral data from server to client. That said, I definitely think UDP is something GGG should try. A small Wireshark sample of playing PoE showed me receiving 410 packets with a total size of 39738 bytes over 21.8 seconds (all of which were mid-combat; I took a high-level character and was surrounded by monsters from Normal Mud Flats for the entire captured segment). TCP has 20 bytes per header, while UDP has an 8 byte header size, saving 12 bytes per header for a total savings of 4920 bytes, which is 12.3% of the total. That means that GGG could increase server-to-client communication roughly 14% by going UDP without using more bandwidth, the only cost of which is that packets which would have gotten there late, due to getting lost along the way, instead simply wouldn't show up at all... but when you're dealing with monster positions, where information quickly becomes outdated, this isn't a very big price to pay, because late information is pretty worthless anyway. And since that's something GGG hasn't tried yet, I really don't think OP's proposal is the right call, at the very least not for the current time. qwave can go on and on about how security issues with his suggestion are mitigatable, and to a certain extent I agree that they are, but such security issues can never be fully mitigated. Still, it's a proposal that has a latency pro in exchange for a security con, which isn't the most unreasonable offer in the world. But until we know that GGG cannot fix the problem without adding additional security vulnerabilities, there is no good reason to push for it. Yet. When Stephen Colbert was killed by HYDRA's Project Insight in 2014, the comedy world lost a hero. Since his life model decoy isn't up to the task, please do not mistake my performance as political discussion. I'm just doing what Steve would have wanted. Last edited by ScrotieMcB#2697 on Nov 18, 2013, 10:47:16 AM
" He's saying the following: - At start he simulates a lag spike of just below the maximum timeout duration, so 1 second. Let's say during this time he saves up 10 packets. Server will assume the client is still in the loading screen or something like that. - From this point on he starts sending packets normally starting from the saved up queue and adding new packets onto the end, thus maintaining a constant connection whilst keeping a queue of 10 packets ready at all times. - If a death packet is added onto the end he'll force a delay between his cached packets of just below a second, with 10 packets this means he gains around 9 seconds total before he's forced to send the death packet whilst still maintaining what seems to be a constant connection. - He can now inject packets, like using a portal scroll and using the resulting portal, in between cached up packets. Timestamps are easily corrupted to match the new timeflow he's creating. In the end he's sending out a constant stream of packets with no more than second delay between them. Your server has no way to know that these were saved up in advance and are being manipulated to avoid a death. My vision for a better PoE: http://www.pathofexile.com/forum/view-thread/863780
" On the other hand you don't understand it. I'll be sending all the snapshots in your dedicated interval. I'll just postpone the send by 0.2 secs! While in reality I'll be gaining 0.2 secs of what player really sees on the screen. Each snapshot will just be very slightly delayed gaining me time. You would have to check the times when you received the snapshots against real times to prevent this. Even in that scenario I'll change my tactics to just pretend I'm waiting 5 secs at start while allowing client to actually do actions. I'll alter the snapshots time so that it's like I was waiting 5 secs at start (client loaded slow). There will be no difference for the server! Even more sophisticated thing is that I'll be inserting small player waits on place over-time when I detect it's appropriate! :-) Remember I know the distance to monsters or that they move! MY CHALLENGES ARE DONE ON HC, IT'S NOT SC GUYS! Last edited by Filousov#5457 on Nov 18, 2013, 10:55:57 AM
You would only save 12% of the total if you converted all the packets to UDP. There simply aren't enough savings to warrant the lost and out-of-order packets, which can worsen desync.
Of course GGG has tried UDP. It's one of the most basic design choices you make when developing a multiplayer game.
Filousov: Let me get this straight. You are going to write software which automates the game by generating artificial packets using a cryptographic hash against a random seed, inject them into a buffer, and simulate the entire game with a client-less solution? The only way you could have a 'queue' of snapshots buffered is if you modify the game client's entire simulation speed. So yeah, you will need to hook the entire draw buffer and entity repository. You might as well recode the game client while you're at it. Also, you would need a supercomputer and an advanced neural network to produce a snapshot buffer that large and sophisticated (to mimic a human player).
I promise that there is nobody in this community (or possibly on this PLANET) who will go to these lengths of reverse engineering to reduce their odds of dying. If you are this smart, you can already bot the entire game in its current state flawlessly.
Last edited by qwave#5074 on Nov 18, 2013, 10:59:50 AM
@qwave
You proceed from a false assumption. It's not a bug, it's a feature. And remember, it's not a lie if you believe it.
"In all seriousness, 2% savings would be sufficient. Old monster position packers are worthless and not even close to being worth the effort of rerouting them once they're lost. And no, they haven't. When Stephen Colbert was killed by HYDRA's Project Insight in 2014, the comedy world lost a hero. Since his life model decoy isn't up to the task, please do not mistake my performance as political discussion. I'm just doing what Steve would have wanted.
ScrotieMcB, do you honestly believe that if they had 2% more bandwidth, they could eliminate desync? Your suggestions are really starting to move outside the realm of possibility now.
Last edited by qwave#5074 on Nov 18, 2013, 11:01:24 AM
" I believe it's actually much more easier - you just need to reverse engineer the place where packets are send and update time in there. It's all! You don't need to re-code much of client for 5 secs waiting strategy at start. You just inject a code to your service which will do the change in time in the packets send and store the packets for 5 seconds. P.S.: And any of your cryptography is utterly useless since the cryptographic routine will be used on already changed packet! :-) MY CHALLENGES ARE DONE ON HC, IT'S NOT SC GUYS! Last edited by Filousov#5457 on Nov 18, 2013, 11:02:10 AM
Filousov: How do you generate the packet buffer? You will need to use a cryptographic hash against the seed on every permutation in order to generate a packet buffer for the FUTURE. Do you have some sort of quantum processor at your disposal?
You can't just 'change' a packet without disrupting the entire snapshot. Any changes to any packets would require you to re-calculate the complete simulation.
Last edited by qwave#5074 on Nov 18, 2013, 11:04:25 AM
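A toy illustration of that last point, assuming (as the proposal seems to imply) that each snapshot is derived by hashing the previous state together with the new input; this is not GGG's or qwave's actual scheme, just a sketch of why altering one packet forces everything after it to be recomputed:

```python
import hashlib, json

def advance(state_hash: str, player_input: dict) -> str:
    """Deterministically derive the next state hash from the previous one
    plus this tick's input (a toy stand-in for the full game simulation)."""
    blob = state_hash + json.dumps(player_input, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

seed = "instance-seed"
inputs = [{"tick": 0, "click": [10, 20]},
          {"tick": 1, "click": [11, 20]},
          {"tick": 2, "click": [12, 21]}]

# Honest chain of snapshots.
chain = [seed]
for inp in inputs:
    chain.append(advance(chain[-1], inp))

# Tampering with one earlier input invalidates every later snapshot.
inputs[1]["click"] = [99, 99]
tampered = [seed]
for inp in inputs:
    tampered.append(advance(tampered[-1], inp))

assert chain[2] != tampered[2] and chain[3] != tampered[3]
```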