Dylan K. Taylor ed021d193d
BlockTranslator: cut memory usage in half
This was achieved by storing binary representations of the blockstates, rather than the original BlockStateData objects.

Due to the insane object:data ratio of Tag objects (40:1 for ByteTag, for example), even modestly sized NBT can explode in memory footprint. This was previously seen in the absurd 25 MB footprint on file load.
Previously, I attempted to mitigate this by deduplicating tag objects, but that only treated a symptom rather than addressing the cause.

We don't actually need to keep the NBT around in memory, since we don't use it for anything other than matching blockstates. We can therefore afford for this code path to be a little slower, since the lookup is slow anyway and the result is cached.
In fact, using encoded ordered states as hash keys significantly improves the speed of lookups for stuff like walls, which have many thousands of states.
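As a rough sketch of the idea (not the actual code: encodeOrderedStates(), the property names and the runtime ID below are made up, and it assumes the pocketmine/nbt library), the state properties can be re-serialized in a fixed key order and the resulting binary string used directly as an array key:

<?php

use pocketmine\nbt\LittleEndianNbtSerializer;
use pocketmine\nbt\TreeRoot;
use pocketmine\nbt\tag\CompoundTag;

/**
 * Hypothetical helper: encode a blockstate's properties into a canonical
 * binary string so it can be used directly as an array (hash table) key.
 */
function encodeOrderedStates(CompoundTag $states) : string{
	//re-add the properties in sorted key order, so that two logically
	//identical states always produce byte-identical output
	$tags = $states->getValue();
	ksort($tags, SORT_STRING);
	$ordered = CompoundTag::create();
	foreach($tags as $name => $tag){
		$ordered->setTag((string) $name, $tag);
	}
	return (new LittleEndianNbtSerializer())->write(new TreeRoot($ordered));
}

//lookup table keyed by "name|encoded states" instead of by NBT object trees -
//comparing short binary strings is much cheaper than deep-comparing CompoundTags
$states = CompoundTag::create()
	->setString("wall_block_type", "cobblestone")
	->setByte("wall_post_bit", 0);

/** @var int[] $stateToRuntimeId */
$stateToRuntimeId = [];
$stateToRuntimeId["minecraft:cobblestone_wall|" . encodeOrderedStates($states)] = 1234;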

We keep generateStateData() around, since it's still possible we may need the associated BlockStateData, and it can be easily reconstructed from the binary-encoded representation stored in BlockStateDictionaryEntry.
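Reconstruction is then just the reverse operation, as in the sketch below (DictionaryEntrySketch and generateStateProperties() are invented names, with a plain CompoundTag standing in for BlockStateData):

<?php

use pocketmine\nbt\LittleEndianNbtSerializer;
use pocketmine\nbt\tag\CompoundTag;

/**
 * Illustrative sketch only: the state NBT is rebuilt on demand from the
 * binary blob kept in the entry.
 */
final class DictionaryEntrySketch{

	public function __construct(
		private string $stateName,
		private string $rawStateProperties //binary NBT, as produced by the encoder above
	){}

	public function getStateName() : string{
		return $this->stateName;
	}

	public function generateStateProperties() : CompoundTag{
		//decoding only happens when a caller actually needs the NBT, so the
		//object-tree cost is paid transiently instead of being held in memory
		return (new LittleEndianNbtSerializer())->read($this->rawStateProperties)->mustGetCompoundTag();
	}
}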
2023-05-03 23:11:00 +01:00