The following callbacks can now be registered with the timings system, allowing threads to be notified of these events:
- Turning on/off (`TimingsHandler::getToggleCallbacks()->add(...)`)
- Reset (`TimingsHandler::getReloadCallbacks()->add(...)`)
- Collect (`TimingsHandler::getCollectCallbacks()->add(...)`)
Collect callbacks must return `list<Promise>`. The promises must be `resolve()`d with a `list<string>` of printed timings records, as returned by `TimingsHandler::printCurrentThreadRecords()`. It's recommended to use one promise per thread.
A timings report will be produced once all promises have been resolved.
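A minimal sketch of a collect callback, assuming a thread that only needs to report its own records (`PromiseResolver` is PocketMine's promise API; a multi-threaded collector would instead resolve each thread's promise once that thread's records arrive):

```php
use pocketmine\promise\PromiseResolver;
use pocketmine\timings\TimingsHandler;

TimingsHandler::getCollectCallbacks()->add(function() : array{
	//one promise per thread is recommended; this thread's records are
	//available immediately, so the promise can be resolved right away
	$resolver = new PromiseResolver();
	$resolver->resolve(TimingsHandler::printCurrentThreadRecords());
	return [$resolver->getPromise()];
});
```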
This system is used internally to collect timings for async tasks (closes #6166).
For timings viewer developers:
Timings format version has been bumped to 3 to accommodate this change. Timings groups should now include a `ThreadId` at the end of their names to ensure that their record IDs are segregated correctly, as the IDs could otherwise conflict between threads. The main thread is not required to specify a thread ID. See pmmp/timings@13cefa6279 for implementation examples.
The new PHPStan error is caused by phpstan/phpstan#10924.
closes #5854
Cancelling task runs doesn't make any sense:
- It breaks sequential task execution, since later tasks might depend on state from earlier tasks.
- It doesn't actually cancel the task: at best it prevents the task from running, but it cannot interrupt one that has already started (though interrupting a task wouldn't make sense either).
We haven't used this "feature" in the core since 22b5e5db5e822ac94ed3978ea75bbadcfa8e7f4f, as it was causing unexpected behaviour for plugins, along with occasional shutdown crashes due to inconsistent worker states.
This is no longer a concern with pmmpthread and PHP 8.1 and up. The behaviour that caused statics to be inherited was the result of bugs in PHP 8.0 and below, which have since been fixed.
Since some tasks depend on sequential execution on a particular worker (e.g. PopulationTask must be preceded by GeneratorRegisterTask), it doesn't make sense to continue executing tasks on a worker after an error occurs.
Moreover, a task crashing may render the whole server unstable, as it leaves the server in an undefined state. This is the same kind of problem we fixed with scheduled tasks in PM3.
In past versions, pthreads was unreliable enough that random tasks would crash for no obvious reason, forcing us to accommodate this. I still don't know the origin or frequency of those issues, but I think it's time to rip the band-aid off and solve these problems for real.
This is only possible since pthreads 5.1 and pmmpthread.
On Windows, this single ThreadSafeArray allocation accounts for 30% of the total cost of allocating an AsyncTask object, which is enormous.
In past versions we couldn't lazily initialize it, because the object might get destroyed before the main thread had a chance to dereference it, leading to a crash when collecting completed tasks. This is no longer an issue thanks to object rescue behaviour implemented in pthreads 5.1.
I think this is probably OK in terms of thread-safety, as only one thread writes the property.
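A minimal sketch of the lazy-initialization pattern described above (class and method names are hypothetical, not the actual AsyncTask internals):

```php
use pmmp\thread\ThreadSafe;
use pmmp\thread\ThreadSafeArray;

class ProgressHolder extends ThreadSafe{
	//allocated on demand - most tasks never publish progress, so they
	//skip the expensive ThreadSafeArray allocation entirely
	private ?ThreadSafeArray $progressUpdates = null;

	//called only from the worker thread, so only one thread ever
	//writes the property
	public function publish(string $serialized) : void{
		$this->progressUpdates ??= new ThreadSafeArray();
		$this->progressUpdates[] = $serialized;
	}

	//called from the main thread to drain any pending updates
	public function fetchAll() : array{
		$result = [];
		while($this->progressUpdates !== null && ($update = $this->progressUpdates->shift()) !== null){
			$result[] = $update;
		}
		return $result;
	}
}
```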
This is now supported thanks to the object rescue feature implemented in pthreads 5.1, which makes it possible to return thread-safe values from async tasks.
This needs explicit support, since the task would otherwise attempt to serialize such values, which is no longer supported.
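A minimal sketch of a task returning a thread-safe result directly (the task itself is hypothetical, but `setResult()`/`getResult()` are the standard AsyncTask accessors):

```php
use pmmp\thread\ThreadSafeArray;
use pocketmine\scheduler\AsyncTask;

class ExampleTask extends AsyncTask{
	public function onRun() : void{
		$result = new ThreadSafeArray();
		$result[] = "computed on the worker thread";
		//a thread-safe value can now be set as the result directly,
		//with no serialization round trip
		$this->setResult($result);
	}

	public function onCompletion() : void{
		/** @var ThreadSafeArray $result */
		$result = $this->getResult();
		echo $result[0] . "\n";
	}
}
```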
I previously avoided this because I was unsure of the effects; however, we already use typed properties on Threaded things elsewhere, and the only known issues are with uninitialized properties and arrays.
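A short illustration of those caveats (hypothetical class, not core code):

```php
use pmmp\thread\ThreadSafe;
use pmmp\thread\ThreadSafeArray;

class TypedPropsExample extends ThreadSafe{
	public int $counter = 0;              //typed scalars work as expected
	public ?ThreadSafeArray $list = null; //thread-safe object refs are fine

	//public array $values = [];   //arrays can't live on ThreadSafe objects -
	//                             //a ThreadSafeArray must be used instead
	//public int $uninitialized;   //uninitialized typed properties are a
	//                             //known problem area and should be avoided
}
```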
Without this change, the result of an async task takes a multiple of 50ms to be processed, regardless of how long the task itself takes to run. This delay causes issues for things like world generation (which locks adjacent chunks) and async packet compression (which experiences elevated latency because of it).
This is not an ideal solution for packet compression since it will cause the sleeper handler to get hammered, but since it's already getting hammered by every packet from RakLib, I don't think that's a big problem.
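A minimal sketch of the snooze wake-up pattern this relies on (assuming the `SleeperNotifier`-based API; exact names may differ between versions, and `collectTasks()` is a hypothetical stand-in):

```php
use pocketmine\snooze\SleeperHandler;
use pocketmine\snooze\SleeperNotifier;

$sleeper = new SleeperHandler();
$notifier = new SleeperNotifier();

//main thread: this handler runs as soon as a worker signals completion,
//instead of waiting for the next 50ms tick to poll for finished tasks
$sleeper->addNotifier($notifier, function() : void{
	collectTasks(); //hypothetical: drain and process finished async tasks
});

//worker thread: wake the main thread immediately after finishing a task
$notifier->wakeupSleeper();
```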