this is a partial fix for #2717, but it's still not ideal, because it'll spam whenever an attempt is made to generate a chunk. However, fixing this properly requires potentially breaking API changes.
this now has a mouthful of a name. I'd like to invert it, but I can't do that without silently breaking backwards compatibility, which is unacceptable.
not ready to call this "fixed" yet, because any chests that were already affected by the bug will remain affected. This change only prevents more broken chests like these from being created.
Plugins would indirectly trigger permissible recalculation too early in the login sequence, which then caused their permissions to be recalculated and subscribed them to the broadcast permission far too early.
generating 1 large bounded random costs the same as generating a small one, so it makes more sense to generate one large value and split it than to generate 4 small ones separately.
Note that prior to PHP 7.1 this code would not have worked, because 64-bit ranges were not handled appropriately.
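A minimal sketch of the idea, with illustrative variable names (not the actual code from this commit): instead of four bounded draws, make one draw covering all the bits and split it.

```php
// Four separate bounded draws:
$x = mt_rand(0, 15);
$y = mt_rand(0, 15);
$z = mt_rand(0, 15);
$w = mt_rand(0, 15);

// One draw covering all 16 bits costs the same as any single one of the above,
// and the bit-twiddling to split it is much cheaper than three extra draws:
$k = mt_rand(0, 0xffff);
$x = $k & 0x0f;
$y = ($k >> 4) & 0x0f;
$z = ($k >> 8) & 0x0f;
$w = ($k >> 12) & 0x0f;
```

If the combined draw needs more than 32 bits, this relies on mt_rand() handling 64-bit ranges correctly, hence the PHP 7.1 note above.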
In vanilla it doesn't drop the exact number of points you collected. Instead, you recover 7 points per level you had, capped at 100 points; since level 1 costs 7 points and each later level costs 2 more than the previous, you lose a little for every level above 1. Hence, if you had 10 levels, you get back enough points to fill 5 levels and most of a 6th, and 14-15 levels gets you the upper bound of about 7.5 levels.
This is not identical to vanilla, but I don't care because it gets rid of edge cases and also makes it easier to integrate with EntityDeathEvent in the future.
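For reference, the vanilla rule described above works out to roughly the following (function names are hypothetical, and the level-cost formula only covers the low-level range discussed here):

```php
// Points dropped on death: 7 per level you had, capped at 100.
function getDeathXpAmount(int $level) : int{
	return min(100, 7 * $level);
}

// Cost of advancing from $level to $level + 1: level 1 costs 7, each later level 2 more.
function getXpCostToNextLevel(int $level) : int{
	return 7 + 2 * $level;
}
```

With 10 levels you drop 70 points; levels 1-5 cost 55 points in total and level 6 costs 17 more, hence "5 levels and most of a 6th".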
I believe this is caused by a bug in the Linux kernel, since it only impacts certain machines I tested (one, to be specific). Whatever the case, setting a max backlog size is prudent anyway, and it fixes the problem.
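Assuming "backlog" here means the listen-queue size on the RCON socket, the fix amounts to passing an explicit limit instead of relying on the platform default, something like:

```php
// Hypothetical sketch: $socket is a bound socket from socket_create()/socket_bind().
if(!socket_listen($socket, 5)){ // explicit max backlog instead of the platform default
	throw new \RuntimeException("RCON listen failed: " . socket_strerror(socket_last_error($socket)));
}
```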
Packets with a too-short payload would either cause the RCON thread to hang until the client disconnected, or crash the RCON thread entirely.
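A sketch of the kind of validation that avoids both failure modes (names and framing are illustrative, not the actual RCON code):

```php
// Read the 4-byte little-endian length prefix; real code would loop on short reads.
$header = socket_read($client, 4);
if($header === false || strlen($header) !== 4){
	return false; // drop the connection instead of blocking on a half-sent header
}
$length = unpack("V", $header)[1];

// A valid RCON payload is at least 10 bytes: request ID (4) + type (4) + two NUL bytes.
if($length < 10){
	return false; // reject malformed packets instead of hanging or crashing
}
```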
This will now throw an exception at the source instead of crashing when the entity is saved, which should put the blame on the plugin actually responsible.
This also includes magic method hacks to preserve backwards compatibility, since the fireTicks field is now protected.
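The shape of the change looks roughly like this (a sketch, assuming the 0-32767 bound comes from the signed short tag written on save):

```php
// Inside Entity (heavily reduced). The field is now protected, with a validating setter.
protected $fireTicks = 0;

public function setFireTicks(int $fireTicks) : void{
	if($fireTicks < 0 or $fireTicks > 0x7fff){ // must fit the signed short written on save
		throw new \InvalidArgumentException("Fire ticks must be in range 0 ... 32767, got $fireTicks");
	}
	$this->fireTicks = $fireTicks;
}

// BC shim: old code touching ->fireTicks from outside the class now hits these instead.
public function __get($name){
	if($name === "fireTicks"){
		return $this->fireTicks;
	}
	trigger_error("Undefined property: " . get_class($this) . "::\$$name", E_USER_NOTICE);
	return null;
}

public function __set($name, $value){
	if($name === "fireTicks"){
		$this->setFireTicks((int) $value); // validation now happens at the source, not on save
	}else{
		$this->{$name} = $value; // preserve normal dynamic-property behaviour
	}
}
```

This way a plugin that assigns a garbage value blows up immediately in its own stack trace, rather than corrupting the entity's NBT and crashing later during saving.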
Encoded tags larger than 32KB overflow the length field, so we can't send these over the network. However, it's unreasonable to dump this burden onto users by crashing their servers, so the next best solution is to just not send the NBT. This isn't ideal either (books and the like with too-large tags won't work on the client side), but it's better than crashing the server or the client because of a protocol bug. Mojang have confirmed this will be resolved in a future MCPE release, so we'll just work around the problem until then.
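The workaround presumably boils down to a size check at packet-encode time, along these lines (the serializer API shown is illustrative):

```php
// When writing an item's NBT into a network slot (hypothetical sketch):
$nbtData = $nbtWriter->write($item->getNamedTag()); // serialized TAG_Compound blob
if(strlen($nbtData) > 32767){
	// The length prefix is a signed 16-bit field; anything larger would corrupt the
	// packet, so send the item without its NBT instead of crashing server or client.
	$nbtData = "";
}
```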