this uses the unused bits on the Y component to expand the X/Z axes to 2^27 blocks, instead of the 2^21 permitted by a regular morton3d code. It does require sacrificing an additional bit on the Y axis, but the performance advantages are more than worth it.
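A rough sketch of the packing idea, assuming the morton3d_encode() function from the ext-morton extension is available (a simplified illustration, not the exact World::blockHash() implementation):

    <?php
    declare(strict_types=1);

    // morton3d_encode() interleaves the low 21 bits of each argument into a
    // 63-bit key. Y only needs 9 of its 21 bits here, so the spare bits of
    // the Y lane carry the upper 6 bits of X and Z, extending both to 27 bits.
    function blockHash(int $x, int $y, int $z) : int{
        return morton3d_encode(
            $x & 0x1fffff,
            ($y & 0x1ff) | ((($x >> 21) & 0x3f) << 9) | ((($z >> 21) & 0x3f) << 15),
            $z & 0x1fffff
        );
    }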
I'm exploring how realistic it would be to just eliminate blockHash global usage (currently in lighting updates and explosions). This would allow scaling up to 2^32 without larger hashes (morton2d for chunks).
this allows plugins (and perhaps, later on, the core itself) to detect async worker overload and warn the user about potential performance issues.
I planned to implement such detection in the core directly, but it turned out to be a bit more complex than I anticipated. At the least, this API might be useful to someone else.
literally nobody uses this. I don't think anyone even knows it exists.
It's also an obstacle to separating event handler registration from PluginManager.
Previously, upwards movement wouldn't be considered, but downwards would, so if an entity bobbed up and down in the air for a while (e.g. while being comboed in PvP), the downwards distance would accumulate and deal a large amount of fall damage on touchdown.
This commit changes fall distance measurement to correctly account for upwards movement.
A better way of measuring fall distance would simply be to record the highest Y coordinate reached while in the air, and then measure the distance between that and the point of contact when landing. This would also remove the need to constantly update the fallDistance field. However, this would involve a BC break and will therefore have to wait until PM4.
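A minimal sketch of that peak-height approach, separated from the Entity class for clarity (the names here are illustrative, not the PocketMine API):

    <?php
    declare(strict_types=1);

    final class FallTracker{
        private ?float $highestY = null;

        /**
         * Call once per tick with the entity's current Y coordinate and
         * on-ground state. Returns the fall distance on landing, else null.
         */
        public function update(float $currentY, bool $onGround) : ?float{
            if(!$onGround){
                // Airborne: only remember the highest point reached so far.
                if($this->highestY === null || $currentY > $this->highestY){
                    $this->highestY = $currentY;
                }
                return null;
            }
            if($this->highestY === null){
                return null; // never left the ground
            }
            // Landed: fall distance is simply peak height minus contact height.
            $fallDistance = $this->highestY - $currentY;
            $this->highestY = null;
            return $fallDistance > 0 ? $fallDistance : null;
        }
    }

Since upwards bobbing never lowers the recorded peak, combo-induced bouncing no longer inflates the measured fall distance.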
This reverts commit b172c93e45e60ee3dbfd8a2efbede60b894673a3.
I made a significant mistake with this change: the scaling factor of
batch-by-chunk is O(nSendBytes), while the scaling factor of sending
directly to players is O(nSendBytes * nActivePlayers). It seems like the
real problem is that this system isn't getting enough usage.
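To make the scaling difference concrete, a worked example with made-up numbers (a 1 kB broadcast payload for one chunk, viewed by 30 players):

    <?php
    declare(strict_types=1);

    $payloadBytes = 1024; // broadcast data for one chunk
    $viewers = 30;        // players who can see that chunk

    // Batch-by-chunk: the payload is compressed once and the result is
    // reused for every viewer.
    $batchByChunkWork = $payloadBytes;

    // Per-player broadcasting: each viewer's session compresses its own copy.
    $perPlayerWork = $payloadBytes * $viewers;

    echo "batch-by-chunk: $batchByChunkWork bytes compressed\n"; // 1024
    echo "per-player:     $perPlayerWork bytes compressed\n";    // 30720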
While batch-by-chunk does reduce compression efficiency in some cases,
it falls back to letting the sessions do individual compression when
the amount of data is less than 256 bytes anyway (compression-threshold
in pocketmine.yml).
My motivation for scrapping this system was to reduce the broadcast
system's complexity to make it easier to thread the living shit out of
compression, but it seems like this change was a step in the wrong
direction for performance.
A few steps can be taken to improve the usefulness of this system (and
also improve output bandwidth):
- Make general entity updates use this system. Movement and velocity
already kinda used this system, but crucially, players did not,
because we couldn't prevent the player from receiving its own movement
updates if we did that, which caused all kinds of problems.
- Therefore, we need to reintroduce static "self" entity IDs, like we
had in the shoghi days when entity ID 0 referred to the "self"
player.
- However, entity ID 0 has a variety of interesting bugs because it
usually refers to "no entity" in MCPE, so it would be better to use
1 (or 2, like MiNET does).
- The fixed ID used should be close to zero to maximize varint
encoding efficiency (see the sketch after this list).
- We assumed that changes to the player's own position and velocity would be
ignored by the client. This assumption depends on the client and
could be broken at any time, so it's best not to rely on it.
- Make block updates use this system (when chunk updates are not sent).
Currently, block updates use a separate mechanism that creates a
second batch for every active chunk, which wastes CPU and decreases
bandwidth efficiency on multiple fronts (reduced compression
efficiency, more cross-thread interactions, more bytes wasted on
RakNet packet headers).
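To illustrate the varint point from the list above: runtime entity IDs are sent as unsigned varints, which carry 7 payload bits per byte, so a fixed "self" ID below 128 always fits in a single byte. A small sketch:

    <?php
    declare(strict_types=1);

    // Number of bytes an unsigned varint (LEB128-style, 7 bits per byte)
    // needs to encode the given non-negative value.
    function unsignedVarIntSize(int $value) : int{
        $bytes = 0;
        do{
            $value >>= 7;
            ++$bytes;
        }while($value !== 0);
        return $bytes;
    }

    var_dump(unsignedVarIntSize(1));       // int(1) - proposed fixed "self" ID
    var_dump(unsignedVarIntSize(1000000)); // int(3) - a typical auto-assigned runtime ID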
it's not clear what this was supposed to represent, but it actually reports VmSize, which is already covered by "Total virtual memory". This line is misleading and confuses users.
player knockback silently depended on the chunk packet broadcast system sending the player's motion back to itself, which broadcasting to hasSpawned will not do.
On larger packets, this worsens the compression ratio by 1-2%, but reduces compression CPU load by 15-20%. Levels higher than 6 are far more expensive and give diminishing returns.
Level 5 produces a further 25-30% CPU reduction, but increases bandwidth usage by 20-25%, so 6 is the sweet spot.
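A quick way to see the tradeoff on your own data (a toy measurement, not the source of the figures above, which came from real server traffic):

    <?php
    declare(strict_types=1);

    // Compress a repetitive payload at several zlib levels and compare
    // output size and time taken. Results vary heavily with the input data.
    $payload = str_repeat(file_get_contents(__FILE__), 64);

    foreach([5, 6, 7, 9] as $level){
        $start = hrtime(true);
        $compressed = zlib_encode($payload, ZLIB_ENCODING_RAW, $level);
        $ms = (hrtime(true) - $start) / 1e6;
        printf(
            "level %d: %d -> %d bytes (%.1f%%), %.2f ms\n",
            $level,
            strlen($payload),
            strlen($compressed),
            100 * strlen($compressed) / strlen($payload),
            $ms
        );
    }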
Batching packets by chunk instead of by player reduces bandwidth efficiency, because the number of active chunks is almost always larger than the number of active players.
With the old mechanism, a batch of packets for each active chunk (which could be dozens, or hundreds... or thousands) would be created once, and then sent to many players. This makes the compression load factor O(nActiveChunks). Broadcasting directly is simpler and changes the load factor to O(nActivePlayers), which usually means a smaller number of larger batches are created, achieving better compression ratios for approximately the same cost (the same amount of data is being compressed, just in a different way).
This change was made after exploring turning this into a permission. It occurred to me that this feature is entirely superfluous because it's non-vanilla, can be done by plugins, and is usually considered a bug. In addition, disabling this behaviour required third-party code just for this one thing, because it could not be managed by a permissions plugin.
Instead, it's better to implement this behaviour in a plugin if it's desired, using SignChangeEvent and PlayerChatEvent/PlayerCommandPreprocessEvent.
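A minimal sketch of such a plugin for the sign case, assuming the PM3-era SignChangeEvent API (plugin.yml, namespace and the chat/command handlers are omitted for brevity):

    <?php
    declare(strict_types=1);

    use pocketmine\event\block\SignChangeEvent;
    use pocketmine\event\Listener;
    use pocketmine\plugin\PluginBase;
    use pocketmine\utils\TextFormat;

    class SignFormats extends PluginBase implements Listener{

        public function onEnable() : void{
            $this->getServer()->getPluginManager()->registerEvents($this, $this);
        }

        public function onSignChange(SignChangeEvent $event) : void{
            for($i = 0; $i < 4; ++$i){
                // Translate the "&" placeholder into the § formatting escape.
                $event->setLine($i, str_replace("&", TextFormat::ESCAPE, $event->getLine($i)));
            }
        }
    }

Chat and commands could be handled the same way using PlayerChatEvent and PlayerCommandPreprocessEvent respectively.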
close#3856, close#2288