Technical solution to eliminate desync in single-player sessions
When Stephen Colbert was killed by HYDRA's Project Insight in 2014, the comedy world lost a hero. Since his life model decoy isn't up to the task, please do not mistake my performance as political discussion. I'm just doing what Steve would have wanted. Last edited by ScrotieMcB#2697 on Nov 23, 2013, 4:26:17 PM
" Okay, you people should have started with this one. genericacc said it was possible to have secure access to hardware so no hacker can "crack" (with that UEFI Secure Boot thing and that previous discussion). I wonder how that was proved. Is there a formal paper or something on the matter? I think the bolded bit would be interesting. If you have access to hardware you could modify it any way you want, but you, as hacker, have to know what you are modifying it. You are saying there is no way to prevent the hacker from knowing the semantics of the hardware+software (so he knows what to modify), and there is no way to check for non-semantic modifications? (like random changes to memory, registers, bla bla bla). Basically, the analogy to encrypted packets over network: The hacker has access to a packet full of bits he doesn't understand (it's encrypted). What can he do? He can do those hacks I mentioned before, specifically: Modifying it. But how can he modify it? He can semantically modify it (identify the "damage sent" field, and change the amount it has), or non-semantically modify it (change random bits from the packet). In (1), if the hacker doesn't have the encryption key, or a way to decipher the packet, he has NO way to get any semantic content from it, therefore he can't make semantic modifications In (2), the packet can easily have a checksum field in it, so no matter what bits the hacker changes (assuming there is low probability he will change the in-packet checksum in the exact way that the computation will be equal later), the endnode can check whether the bits where changed or not by computing the packet's checksum. If we hold that analogy true for hardware/software hacking....can it be done in a similar manner? Or not? Basically could that analogy be transferred to a hardware/software setting? " That's all local, nothing goes over network (yet) |
" Noted. Yeah this would be part of the "magical black box". " If I recall correctly, this was the "checksum" thing I mentioned. If the hacker can't decipher the packets, he can't know how to modify the bits of the packet AND modify the checksum, so that the new checksum of the packet is the same as the modified checksum he overwrote, right? So if he just randomly changes bits in the packet, once deciphered in the server/client, they would be able to know it was tampered with by checksuming the packet and seeing it is differeng from the checksum inside of it. " As far as I know, there is only one RTT for this to work. Even if you use UDP, if each packet contains this "authentication key", then both the client and sever can authenticate any packet they receive instantly. In this specific scenario there is no overhead. Yes, there may be overhead in other "possible protocols" I mention in other hypothesis, but not in this one I believe. " Same as above " It is so "magic" for the sake of this specific argument. If we realize "Okay, the client can be trusted pretty well assuming the OS is magic", then we can put more effort into figuring out how to make this "magic OS" a reality. Well....not me and maybe not many of you, since we'd need ultra heavy knowledge and experience in operative systems, OS and hardware security, and the like :P " That is not necessary for the actual logical conclusion, it is indeed necessary to determine the validity of the hypothesis. For example, yes the "magical OS". I kind of made it safe in a way, since I said "There exists a way for the OS to..." or "There exists an application protocol that..." . Existencials are much harder to disprove, since you have to prove, that no matter what OS, protocol, or configuration you use, there is NO way that will hold true. That is indeed a very hard thing to prove, since you have to logically group every one of those entities together, without any previous preconditions applied to them (which is what is done in most of these rebuttals). I would like to see such "heavy rebuttal" if it can be made, since it's the only thing that would make this theoretically impossible like I said earlier. Yes, it's hard, it's not "practical", but it's interesting and doing such exercise may actually help other areas of security not related to PoE, etc. If you do manage to create this "almost perfect" system, then yes in that scenario it would become practical :D |
Here's the thing: encrypting the packet is useless, because all you care about is the integrity of the packet. The attacker presumably knows what's in it already, since they can look at the PoE executable and reverse-engineer the packet format. So all you care about is signing the packets.

In order to make this secure, you have to prevent the attacker from finding the signing key, otherwise you lose instantly. So you need to move the signing operation (and here we firmly depart from the realm of what will ever actually happen) into some tamper-resistant hardware device, because otherwise the attacker can just dump the key out of memory (even if you prohibit this at the OS level, a sufficiently motivated attacker could virtualize your system or something). But then you have another problem: how do you make sure the signing oracle only signs 'legitimate' packets? You can't require the packet-signing requests to themselves be signed with some other key, because then you run into the same issue: how do you make sure you only sign legitimate signing requests?
I can't construct a formal proof of impossibility because this system hasn't really been formalized in the language that people usually use when they talk about cryptosystems; formalizing the notion of an attacker being able to dump memory, disassemble a program, etc., is kind of hard. You can certainly make creating fake packets a complete and utter pain in the ass by heavily obfuscating the key, etc. But that's not the point. Last edited by Polarization#5886 on Nov 23, 2013, 3:05:33 PM
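For anyone who wants to see the shape of what is being described here, this is a sketch of the packet-signing scheme using Ed25519 (the library choice is mine; the thread doesn't specify one). The comments mark exactly where the argument breaks down: the signing key has to live somewhere the attacker can reach.

```python
# pip install cryptography   (library choice is an assumption, not from the thread)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The signing key would ship inside (or be derived by) the client --
# exactly the thing the post says an attacker can dump from memory.
client_signing_key = Ed25519PrivateKey.generate()
server_known_pubkey = client_signing_key.public_key()

def client_send(payload: bytes):
    """Client signs every outgoing packet."""
    return payload, client_signing_key.sign(payload)

def server_accept(payload: bytes, signature: bytes) -> bool:
    """Server only accepts packets carrying a valid signature."""
    try:
        server_known_pubkey.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload, sig = client_send(b'player_position=(120, 45)')
print(server_accept(payload, sig))                     # True
print(server_accept(b'player_position=(0, 0)', sig))   # False - forged payload rejected

# The unsolved part: nothing here stops a cheat program from calling
# client_send() itself, or from reading client_signing_key out of the
# process -- the "signing oracle" problem described above.
```

Moving `client_signing_key` into tamper-resistant hardware removes the memory-dump attack but leaves the "who is allowed to ask for a signature" problem untouched.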
I'll stop responding to the network/etc.-related issues with my proposal, since, based on the hypothesis, all or most of what you guys are raising can be solved. But mostly because, if I don't, this will get too long.
" Ideally the bolded bit shouldn't be possible...somehow. " Okay. Yeah, let's stop all other discussion since this is the foundation of all of what I said, might as well (try to) tackle this. I understand this, but still I get the feeling "what if..?", i.e it isn't "formally proved" it can't happen, right? Does there exist NO possible computer architecture (ever, even ones not already created), and no possible software mechanism, no possible OS with specific security considerations, etc, that can hold all of this true? For example, creating a similar analogy to that "modify encrypted packet" scenario I mentioned before? ....but *sigh* it does seem futile to keep discussing this. I realize the above is a little bit pointless to discuss in a PoE forum perhaps, it would require a lot of theoretical work that might not even be done in our own lifetime or ever. It would require fully formalizing EVERYTHING about computers, their hardware, architecture, etc. We can't even formalize some "Hello World" programs, how can we do that right now? Hopefully this message will be read by a future advanced alien race or future super-humans who can actually do that :( " This is a very succint way of describing my hypothesis, and perhaps its problems. And that last part as well. " There would be a mechanism in the OS that would identify the PoE client calling that "system call". If the OS can prevent any sort of virtualization, and can prevent any sort of hacking into the "PoE process", then I figure it could identify said process and only sign the packets from that one. Although this does add even more hypothesis about this "ideal" OS though. Well, at least I'm happy I got more knowledge out of this situation. The whole discussion became less "you can't trust the client! Why? Just because!" to specifying the exact problems that make that happen, and why they happen. That's good enough for me...(there's also the very very very little possibility I am actually right, and some super-race in the future will figure out a way to prove it! :D ) |
" There exists no way to prove im not a member of some super-race whom have grown you in a virtual reality. All you know if what your senses tell you. And if i control those senses i effectively control your reality. Perhaps its easier to grow virtual beings than design synthetic inteligence, or perhaps its just safer. You will never know for certain. For years i searched for deep truths. A thousand revelations. At the very edge...the ability to think itself dissolves away.Thinking in human language is the problem. Any separation from 'the whole truth' is incomplete.My incomplete concepts may add to your 'whole truth', accept it or think about it
" This thread is fun and interesting and you can learn a lot from it. But it seems it's devolved into certain people getting sensitive and emotional and going after each other, petty fights that achieve nothing. I really liked the comments from Rhys giving some insights into their system, hope he or another dev comes back to comment more. "When you have a hammer, everything looks like a nail."
I probably shouldn't post this, but it is relevant to the discussion.
There are actually methods to get a remote server to trust the client; this process is known as trusted computing, and it involves outfitting computers with a chip called a TPM, which stores a private encryption key and hides it from the user and the rest of the operating system, using a process called curtained memory to perform encryption and decryption tasks without allowing the operating system to view the key in memory. The processor can call on the TPM to encrypt something, but recovering the key from the results is essentially like deciphering a giant hash, something that would take decades (if not longer) to break.

However, the cost of the server being able to trust the client is that the client can be made far more dependent on third parties. For example, music files can be sent encrypted with the TPM private key (or a derivative), which means such files cannot be easily pirated; however, it also means the user has data encrypted on their hard drive which they do not hold the key for. TPMs in general give more power to those who make TPMs (and those who do business with them), leading to an Internet where the user has less and less control over their own data. Imagine a future where OpenOffice no longer works on MS Word documents, because MS Word encrypts saved files using the TPM.

When I worked in the US Army, we used TPMs quite a bit, but I considered such use ethical, because at the end of the day the Army controlled the private keys hidden in the TPMs, and the users did not own the systems; the Army did. In instances where the holder of the keys is also the owner of the hardware, obscuring private keys within TPMs is ethical. Applying TPMs to general public computing is not; therefore, TPMs should only be used as a secure computing feature on LANs or WANs owned by a single entity, for verification between devices on that private network, and should not be an Internet-wide technology.

Unfortunately, the current trend in computing is towards more consolidated power in the hands of major corporations, and it should surprise no one that TPMs are likely to be required in the hardware specifications of future versions of Windows. I encourage everyone to fight this by supporting the Electronic Frontier Foundation, writing your Congressman, or otherwise becoming politically active in efforts to keep the Internet public rather than owned. At the very minimum, if such power is to exist as an Internet-wide phenomenon, it should be in the hands of the elected representatives of the people (the government) and not in the hands of corporations and think tanks.

In any case, for the moment GGG cannot assume that clients have a TPM (it is hardware, after all), so they have to design assuming that trusting the client is impossible. In the future, maybe GGG could trust the client... but they'd be morally wrong if they did so, for further contributing to a TPM-based Internet. Unless the government stepped in to regulate, in which case PoE's global reach might still make TPM-based trust difficult to implement (due to having to deal with a variety of governments). In other words, gonzaw: yes, it's theoretically possible, but it's still not really an option.

When Stephen Colbert was killed by HYDRA's Project Insight in 2014, the comedy world lost a hero. Since his life model decoy isn't up to the task, please do not mistake my performance as political discussion. I'm just doing what Steve would have wanted.
Last edited by ScrotieMcB#2697 on Nov 23, 2013, 4:15:34 PM
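As a conceptual sketch only: the `Tpm` class below is invented to illustrate the property the post describes (the private key never leaves the chip; the host can only ask the chip to produce an attestation), and is not the API of any real TPM library.

```python
import hashlib, hmac, os

class Tpm:
    """Invented stand-in for a TPM: the key exists only inside this object."""
    def __init__(self):
        self.__private_key = os.urandom(32)   # never handed back to the caller

    def attest(self, nonce: bytes, measurements: bytes) -> bytes:
        # Real TPMs sign a "quote" over platform measurements plus a
        # server-supplied nonce; an HMAC stands in for that signature here.
        return hmac.new(self.__private_key, nonce + measurements,
                        hashlib.sha256).digest()

# The server challenges the client with a fresh nonce; client-side software
# cannot forge the answer, because it never sees the key -- only the chip does.
tpm = Tpm()
nonce = os.urandom(16)
measurements = hashlib.sha256(b"contents of poe_client.exe").digest()
quote = tpm.attest(nonce, measurements)
# The server would check `quote` against the key it provisioned for this chip
# (glossed over here) and only then trust what this client reports.
```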
" I cant remember a single post by you in this thread which i have not agreed with up until this paragraph. The government in my eyes is the most likely to abuse control. 'elected officials' are nothing more than those with the most greed. 'elected officials' receive their positions not on their merits, nor their ability to find solutions to relevant problems, nor on their knowledge... It is based on media control, paid for like a whore. He with the gold makes the rules in a democratic republic. For years i searched for deep truths. A thousand revelations. At the very edge...the ability to think itself dissolves away.Thinking in human language is the problem. Any separation from 'the whole truth' is incomplete.My incomplete concepts may add to your 'whole truth', accept it or think about it Last edited by SkyCore#2413 on Nov 23, 2013, 4:23:30 PM
"In the hands of no one, in the hands of the government, in the hands of corporations. The first is the ideal, the middle is the compromise, the last is the worst-case. At least in my opinion; I understand how positions 2 and 3 are up for some debate. However, I think we can both agree on position 1. :) Remember, I am a veteran; would you expect me to have zero faith in the government? Also, let's talk about this: "Here's how player movement/skills currently work:
Here's how monster movement/skills currently work:
Here's how monster movement/skills should work:
Obviously the second configuration is out of sync with the first, while the third configuration is much more in sync with the first; with the third configuration, the client is consistently a one-way ping ahead of the server in animation, and a one-way ping behind the server in terms of damage calculations. edit: I understand "one-way ping" is a really sloppy term but you get what I mean. When Stephen Colbert was killed by HYDRA's Project Insight in 2014, the comedy world lost a hero. Since his life model decoy isn't up to the task, please do not mistake my performance as political discussion. I'm just doing what Steve would have wanted. Last edited by ScrotieMcB#2697 on Nov 23, 2013, 4:51:43 PM
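The timing claim in that last paragraph works out like this (the 40 ms one-way ping is a made-up number):

```python
one_way_ping_ms = 40   # made-up example latency

t_client_input   = 0                                    # player clicks; client starts the animation immediately
t_server_resolve = t_client_input + one_way_ping_ms     # server receives the action and rolls the damage
t_client_result  = t_server_resolve + one_way_ping_ms   # client finally sees the damage numbers

print(t_server_resolve - t_client_input)    # 40 ms: client animation runs one-way ping ahead of the server
print(t_client_result - t_server_resolve)   # 40 ms: client learns the damage one-way ping behind the server
```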