This is not identical to vanilla behaviour, but I don't care, because it gets rid of edge cases and also makes it easier to integrate with EntityDeathEvent in the future.
commit 90bb1894d7f87645b806f5fc67d1b877bb963180
Author: Dylan K. Taylor <odigiman@gmail.com>
Date: Tue Jan 22 18:15:46 2019 +0000

fix some bugs in RCON

I believe this is caused by a bug in the Linux kernel, since it only affects certain machines I tested (one, to be specific). Whatever the case, setting a max backlog size is prudent anyway, and it fixes the problem.

Packets with a too-short payload would either cause the RCON thread to hang until the client disconnected, or crash the RCON thread entirely.
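For illustration, here is a minimal sketch of both fixes in PHP, assuming a Source-RCON-style framing (a 4-byte little-endian size field, then request ID, packet type, payload, and two null terminators). The constants, port, and function names are illustrative, not the actual PocketMine-MP code:

```php
<?php
declare(strict_types=1);

// Illustrative values, not taken from the real implementation.
const RCON_PORT = 25575;
const MAX_BACKLOG = 10;      // explicit cap instead of relying on kernel defaults
const MIN_PACKET_SIZE = 10;  // 4 (request ID) + 4 (packet type) + 2 (null terminators)
const MAX_PACKET_SIZE = 4096;

$listener = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($listener, "0.0.0.0", RCON_PORT);
socket_listen($listener, MAX_BACKLOG); // bounded backlog sidesteps the kernel quirk

/**
 * Reads one RCON packet from a client, rejecting malformed sizes up front
 * instead of blocking on a payload that will never arrive. A production
 * implementation would also loop on partial reads.
 *
 * @return array|null [requestId, packetType, payload], or null to drop the client
 */
function readPacket($client) : ?array{
    $sizeField = socket_read($client, 4);
    if($sizeField === false || strlen($sizeField) !== 4){
        return null;
    }
    $size = unpack("V", $sizeField)[1];
    if($size < MIN_PACKET_SIZE || $size > MAX_PACKET_SIZE){
        return null; // too short to be valid: drop instead of hanging or crashing
    }
    $body = socket_read($client, $size);
    if($body === false || strlen($body) !== $size){
        return null;
    }
    $header = unpack("VrequestId/VpacketType", substr($body, 0, 8));
    return [$header["requestId"], $header["packetType"], substr($body, 8, -2)];
}
```

Validating the size field before blocking on the payload read is what prevents the hang; the explicit backlog cap replaces whatever default the kernel would otherwise apply.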
Setting an invalid fire ticks value will now throw an exception at the source instead of crashing when the entity is saved, which puts the blame on the plugin actually responsible. This also includes magic-method hacks to preserve backwards compatibility, since the fireTicks field is now protected.
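A minimal sketch of how that can look, assuming a simplified Entity class; the 0x7fff limit reflects the value being saved as a short NBT tag, but the method names and exception message here are assumptions rather than the real internals:

```php
<?php
declare(strict_types=1);

class Entity{
    protected $fireTicks = 0;

    public function getFireTicks() : int{
        return $this->fireTicks;
    }

    /**
     * Throws here, at the call site, so the stack trace points at the plugin
     * that supplied the bad value instead of blowing up later on entity save.
     */
    public function setFireTicks(int $fireTicks) : void{
        if($fireTicks < 0 || $fireTicks > 0x7fff){ // persisted as a short NBT tag
            throw new \InvalidArgumentException("Fire ticks must be in range 0 ... " . 0x7fff . ", got $fireTicks");
        }
        $this->fireTicks = $fireTicks;
    }

    /** BC shim: old code reading $entity->fireTicks (now protected) still works. */
    public function __get($name){
        if($name === "fireTicks"){
            return $this->getFireTicks();
        }
        trigger_error("Undefined property: " . static::class . "::\$" . $name, E_USER_NOTICE);
        return null;
    }

    /** BC shim: old code assigning $entity->fireTicks is routed through validation. */
    public function __set($name, $value){
        if($name === "fireTicks"){
            $this->setFireTicks((int) $value);
        }else{
            $this->{$name} = $value; // fall back to a plain dynamic property
        }
    }
}
```

Routing `__set()` through the validating setter means old plugins writing the property directly get the same early failure as plugins calling the setter.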
Blocks queued in the write batch had not yet been set in the world, so reading them back from the world yielded air instead of trunk, allowing generation to overwrite blocks which should have been logs.
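The fix, sketched minimally below with illustrative names (the real generator API differs), is to consult the pending batch before falling back to the world:

```php
<?php
declare(strict_types=1);

// Hypothetical stand-in for the world accessor used during generation.
interface ChunkManager{
    public function getBlockIdAt(int $x, int $y, int $z) : int;
}

class BlockWriteBatch{
    /** @var int[] pending block writes, indexed by position key */
    private $changes = [];

    public function addBlockAt(int $x, int $y, int $z, int $blockId) : void{
        $this->changes["$x:$y:$z"] = $blockId;
    }

    /**
     * Prefer not-yet-committed writes over the world. Without this, a later
     * generation step reading a trunk position sees air (the batch hasn't
     * been applied yet) and happily overwrites the would-be logs.
     */
    public function fetchBlockAt(ChunkManager $world, int $x, int $y, int $z) : int{
        return $this->changes["$x:$y:$z"] ?? $world->getBlockIdAt($x, $y, $z);
    }
}
```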
This prevents plugins from sending wrong-direction packets at the compiler level (or would, if we had a compiler). It's more robust than a client/server getter plus a thrown exception, since a static analysis tool can detect faults caused by sending wrong packets from the server. This is also used to deny service to dodgy clients which send wrong-direction packets to the server to attack it.
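A minimal, self-contained sketch of the idea; the marker interfaces, packet classes, and session class below are assumed names for illustration, not the real API:

```php
<?php
declare(strict_types=1);

abstract class DataPacket{}

interface ClientboundPacket{} // server -> client
interface ServerboundPacket{} // client -> server

final class LoginPacket extends DataPacket implements ServerboundPacket{}
final class TextPacket extends DataPacket implements ClientboundPacket, ServerboundPacket{}

final class Player{
    public function disconnect(string $reason) : void{
        echo "disconnected: $reason\n";
    }
}

final class NetworkSession{
    /**
     * Passing a serverbound-only packet here is a type error, so a static
     * analyser catches wrong-direction sends before they ever run.
     */
    public function sendPacket(ClientboundPacket $packet) : void{
        // ... encode and queue for sending
    }

    /**
     * Runtime guard for the other direction: a client sending a packet the
     * server should never receive gets dropped instead of being handled.
     */
    public function handleIncoming(Player $player, DataPacket $packet) : void{
        if(!($packet instanceof ServerboundPacket)){
            $player->disconnect("Unexpected clientbound packet received");
            return;
        }
        // ... dispatch to the appropriate handler
    }
}
```

The type hint on sendPacket() gives static analysers like PHPStan something to check, while the instanceof guard handles the runtime attack case.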
This is in preparation for clientbound/serverbound packet separation. I did this already on another branch, but that changeset depended on a massive refactor to split apart packets and binarystream which I'm still not fully happy with.