This is not identical to vanilla, but I don't care because it gets rid of edge cases and also makes it easier to integrate with EntityDeathEvent in the future.
I believe this is caused by a bug in the Linux kernel, since it only impacts certain machines I tested (one, to be specific). Whatever the case, setting a max backlog size is prudent anyway, and fixes the problem.
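For illustration, bounding the backlog with PHP's sockets extension looks something like this ($bindAddress, $rconPort and the backlog value of 10 are placeholders, not the actual RCON code):

    // sketch: bound the accept backlog explicitly instead of relying on kernel defaults
    $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
    socket_bind($socket, $bindAddress, $rconPort);
    // a small fixed backlog sidesteps whatever the kernel default misbehaviour was
    socket_listen($socket, 10);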
Packets with a too-short payload would either cause the RCON thread to hang until the client disconnected, or crash the RCON thread entirely.
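A sketch of the kind of validation that fixes this (not the actual RCON code; the 10-byte minimum covers the request ID, packet type and the two NUL terminators of the Source RCON protocol):

    // read the 4-byte little-endian length prefix first
    $d = socket_read($client, 4);
    if($d === false or strlen($d) !== 4){
        return false; // dead or misbehaving client
    }
    $size = unpack("V", $d)[1];
    // request ID (4) + packet type (4) + empty payload + 2 NUL terminators = 10 bytes minimum
    if($size < 10 or $size > 4096){
        return false; // malformed packet: bail out instead of blocking on bytes that never arrive
    }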
commit 90bb1894d7f87645b806f5fc67d1b877bb963180
Author: Dylan K. Taylor <odigiman@gmail.com>
Date: Tue Jan 22 18:15:46 2019 +0000
fix some bugs in RCON
This will now throw an exception at the source instead of crashing when the entity is saved, which should put the blame on the plugin actually responsible.
This also includes magic method hacks to preserve backwards compatibility, since the fireTicks field is now protected.
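Roughly, the shim looks like this (a simplified sketch; the real code also has to cope with other dynamic properties, and the 32767 bound is assumed here because fireTicks is saved as an NBT short):

    class Entity{
        protected $fireTicks = 0;

        public function setFireTicks(int $fireTicks) : void{
            // throw here, at the source, so the stack trace points at the offending plugin
            if($fireTicks < 0 or $fireTicks > 0x7fff){
                throw new \InvalidArgumentException("Fire ticks must be in range 0 ... 32767, got $fireTicks");
            }
            $this->fireTicks = $fireTicks;
        }

        // magic accessors keep old code that touched the formerly-public field working
        public function __get($name){
            if($name === "fireTicks"){
                return $this->fireTicks;
            }
            return null;
        }

        public function __set($name, $value){
            if($name === "fireTicks"){
                $this->setFireTicks($value); // old-style writes now get validated too
            }
        }
    }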
Encoded tags larger than 32KB overflow the length field, so we can't send them over the network. However, it's unreasonable to throw this burden onto users by crashing their servers, so the next best solution is to just not send the NBT. This isn't ideal either (books and the like with too-large tags won't work on the client side), but it's better than crashing the server or client due to a protocol bug. Mojang have confirmed this will be resolved by a future MCPE release, so we'll just work around the problem until then.
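The workaround amounts to something like this at the point where an item's tag is written (a sketch: serializeNbt() is an illustrative helper, and the stream methods are modelled on PM's BinaryStream API):

    // the tag length goes into a signed little-endian short, max 32767
    $tag = self::serializeNbt($item->getNamedTag());
    if(strlen($tag) > 32767){
        $tag = ""; // send no NBT rather than corrupt the length field
    }
    $stream->putLShort(strlen($tag));
    $stream->put($tag);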
If a plugin was involved in a crash, we can't safely blame it for the crash, since it might have innocently triggered a core bug. Furthermore, it's difficult to accurately detect whether a plugin caused things like invalid-argument crashes.
It's much less expensive to just calculate the current tick modulo 20 and overwrite past entries. The effect is the same. The only difference is that the arrays won't be ordered by time, but that doesn't matter anyway.
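In other words (field names here are illustrative):

    // before: array_shift($this->tickAverage); $this->tickAverage[] = $tps;  (O(n) every tick)
    // after: constant-time overwrite; ordering by time is lost, which doesn't matter for averages
    $idx = $this->tickCounter % 20;
    $this->tickAverage[$idx] = $tps;
    $this->useAverage[$idx] = $use;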
This is better for performance because the referenced symbols then don't need to be re-resolved on every call.
When encountering an unqualified function or constant reference, PHP will first try to locate a symbol in the current namespace by that name, and then fall back to the global namespace.
Fully qualifying the symbol short-circuits this check, which has substantial performance effects in some cases - in particular, ord(), chr() and strlen() are ~1500x faster to call when fully qualified.
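For example, in namespaced code:

    namespace pocketmine\utils;

    function slow(string $c) : int{
        return ord($c); // checks for \pocketmine\utils\ord() first, then falls back to \ord()
    }

    function fast(string $c) : int{
        return \ord($c); // fully qualified: bound at compile time, eligible for a specialized opcode
    }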
However, this doesn't mean that PM is getting massively faster across the board. In real-world terms, this translates to about a 10-15% performance improvement.
But before anyone gets excited, you should know that the CodeOptimizer in the PreProcessor repo has been applying fully-qualified symbol optimizations to Jenkins builds for years, which is one of the reasons why Jenkins builds have better performance than home-built or source installations.
We're choosing to do this for the sake of future SafePHP integration, and also so we can get rid of the buggy CodeOptimizer, making phar and source installations behave more consistently.
Since 3.2 there have been some runtime performance issues related to garbage collection and dynamic AsyncWorker booting. This is partly because of GC being dumb about shutting down what it thinks are "unused" workers. A worker which has been idle for a single tick is considered the same as a worker which has been idle for hours. The result of this is that on active servers, workers would get shut down and then immediately restarted because of something like chunk sending. Since booting an async worker is frightfully expensive, this causes lag spikes, which is obviously bad.
This commit changes the GC mechanism to only shut down workers which have not been used for the last 5 minutes.
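Sketched out, with names only approximating the AsyncPool internals and 300 seconds matching the 5 minutes above:

    public function collectWorkers() : void{
        $time = time();
        foreach($this->workers as $i => $worker){
            // idle for one tick is no longer enough; require 5 minutes without tasks
            if($this->workerUsage[$i] === 0 and $time - $this->workerLastUsed[$i] >= 300){
                $worker->quit();
                unset($this->workers[$i], $this->workerLastUsed[$i]);
            }
        }
    }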