Commit Graph

23 Commits

SHA1 Message Date
e21d49c980 [commands] Only clean semaphore when there are no waiters 2020-01-21 19:50:37 -05:00
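A hedged sketch of the cleanup rule this commit describes, assuming a per-bucket semaphore map with an in-flight counter; none of these names are the library's internals:

    import asyncio

    class ConcurrencyGuard:
        """Sketch: one semaphore per bucket key, evicted only when idle."""
        def __init__(self, number):
            self.number = number
            self._mapping = {}   # bucket key -> (semaphore, in-flight count)

        async def acquire(self, key):
            sem, count = self._mapping.get(key) or (asyncio.Semaphore(self.number), 0)
            self._mapping[key] = (sem, count + 1)
            await sem.acquire()

        def release(self, key):
            sem, count = self._mapping[key]
            sem.release()
            # Clean the semaphore only when there are no waiters left;
            # evicting it while tasks still wait would strand them on a
            # semaphore no longer reachable through the mapping.
            if count - 1 == 0:
                del self._mapping[key]
            else:
                self._mapping[key] = (sem, count - 1)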
1a7b838d2a [commands] Refactor BucketType to not repeat in other places in code 2020-01-21 03:30:56 -05:00
bf84c63396 [commands] Add max_concurrency decorator 2020-01-21 03:26:41 -05:00
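A hedged usage sketch of the new decorator; the command name and limits are illustrative, while the decorator and MaxConcurrencyReached are the library's:

    from discord.ext import commands

    @commands.command()
    @commands.max_concurrency(1, per=commands.BucketType.user, wait=False)
    async def convert(ctx):
        # At most one invocation per user runs at a time; with wait=False
        # a second concurrent invocation raises MaxConcurrencyReached
        # instead of queueing.
        await ctx.send('working...')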
6071607176 Bump copyright year to 2020
Closes #2510
2020-01-19 20:03:00 -05:00
c7d3ebb400 [commands] Add role cooldown bucket 2019-08-11 18:44:16 -04:00
991140eebe Replace Enum with an internal one for significant speed improvements.
This has been a massive pain point for me personally: the poor design
of the Enum class makes the library's common use cases significantly
slow. Since this Enum is not public facing in terms of *creation*, I
only need to implement the APIs that are used when *accessing* members.

This Enum is a drop-in replacement for the pre-existing enum.Enum class,
except it comes with significant speed-ups. Since this is a lot to go
over, I will let the numbers speak for themselves:

In [4]: %timeit enums.try_enum(enums.Status, 'offline')
263 ns ± 34.3 ns per loop (7 runs, 1000000 loops each)
In [5]: %timeit NeoStatus.try_value('offline')
134 ns ± 0.859 ns per loop (7 runs, 10000000 loops each)

In [6]: %timeit enums.Status.offline
116 ns ± 0.378 ns per loop (7 runs, 10000000 loops each)
In [7]: %timeit NeoStatus.offline
31.6 ns ± 0.327 ns per loop (7 runs, 10000000 loops each)

In [8]: %timeit enums.Status.offline.value
382 ns ± 15.2 ns per loop (7 runs, 1000000 loops each)
In [9]: %timeit NeoStatus.offline.value
65.5 ns ± 0.953 ns per loop (7 runs, 10000000 loops each)

In [10]: %timeit str(enums.Status.offline)
630 ns ± 14.8 ns per loop (7 runs, 1000000 loops each)
In [11]: %timeit str(NeoStatus.offline)
253 ns ± 3.53 ns per loop (7 runs, 1000000 loops each)

In [12]: %timeit enums.Status('offline')
697 ns ± 8.42 ns per loop (7 runs, 1000000 loops each)
In [13]: %timeit NeoStatus('offline')
182 ns ± 1.83 ns per loop (7 runs, 10000000 loops each)
2019-06-09 00:06:34 -04:00
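A minimal sketch of the technique, not the library's actual code: a metaclass builds members as plain class attributes plus one value-to-member dict, so every lookup timed above is an ordinary attribute or dict hit instead of enum.Enum's descriptor machinery. All names here are illustrative:

    class _Member:
        __slots__ = ('name', 'value')

        def __init__(self, name, value):
            self.name = name
            self.value = value

        def __str__(self):
            return self.name

    class _FastEnumMeta(type):
        def __new__(mcs, name, bases, namespace):
            cls = super().__new__(mcs, name, bases, namespace)
            lookup = {}
            for key, value in namespace.items():
                if key.startswith('_'):
                    continue
                member = _Member(key, value)
                setattr(cls, key, member)   # plain attribute: no descriptor overhead
                lookup[value] = member
            cls._value_map = lookup
            return cls

        def __call__(cls, value):
            return cls._value_map[value]    # Status('offline') is one dict hit

        def try_value(cls, value):
            return cls._value_map.get(value, value)

    class Status(metaclass=_FastEnumMeta):
        online = 'online'
        offline = 'offline'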
6dcd68b8d7 [commands] Allow passing current to more cooldown mapping methods.
Also adds a CooldownMapping.update_rate_limit helper function.
2019-04-24 23:26:33 -04:00
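A hedged sketch of the helper in use, assuming `mapping` is a CooldownMapping and `message` an incoming message; the explicit `current` timestamp is what this commit threads through:

    import time

    retry_after = mapping.update_rate_limit(message, current=time.time())
    if retry_after is not None:
        print(f'rate limited, retry in {retry_after:.2f}s')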
919dbcafb3 Consistent use of __all__ to prevent merge conflicts. 2019-04-20 17:20:58 -04:00
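The convention in miniature: one name per line inside a tuple, so two branches adding different exports touch different lines and merge cleanly:

    __all__ = (
        'Cooldown',
        'CooldownMapping',
        'BucketType',
    )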
ec7a701ceb [commands] Allow passing reference time to update_rate_limit 2019-04-14 16:57:47 -04:00
9827d6eeaf [commands] Fix issue with decorator order with checks and cooldowns
The checks and cooldowns are now copied onto the command explicitly, so decorator order no longer matters.
2019-02-23 07:41:25 -05:00
9656a21ebe Bumped copyright years to 2019. 2019-01-28 22:22:50 -05:00
5a585ebf20 Add channel category cooldown bucket type 2018-11-24 22:51:18 -05:00
c8b49d37be [lint] Fix incorrect and inconsistent whitespace
Adjust whitespace to be consistent with the rest of the library.
2018-08-22 21:43:53 -04:00
00a14a46f3 [commands] Added BucketType.members for cooldowns 2018-08-22 21:06:08 -04:00
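A hedged usage sketch covering this bucket type and the category one added above, assuming the released enum names `category` and `member`; the rates are illustrative:

    from discord.ext import commands

    # One use per 30 seconds, shared by every channel in the same category.
    @commands.command()
    @commands.cooldown(1, 30.0, commands.BucketType.category)
    async def announce(ctx):
        ...

    # Two uses per 60 seconds per (guild, user) pair, so the same user
    # gets an independent cooldown in each guild.
    @commands.command()
    @commands.cooldown(2, 60.0, commands.BucketType.member)
    async def report(ctx):
        ...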
9b4a2dc7cb [commands] Minor speed-up for the BucketType.guild case.
None case:
344 ns ± 24.4 ns -> 49.9 ns ± 1.39 ns

Valid case:
128 ns ± 2.76 ns -> 42.7 ns ± 0.459 ns
2017-10-08 07:57:58 -04:00
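The numbers are consistent with collapsing the None branch into one short-circuit expression; a sketch of the pattern, not necessarily the commit's exact code:

    def _guild_bucket_key(message):
        # In DMs message.guild is None, so fall back to the author;
        # `or` short-circuits in a single expression instead of a branch.
        return (message.guild or message.author).id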
1bb7b6ff2d [commands] Make CooldownMapping.get_bucket take Message instead.
Requiring a full-blown Context might be a bit overkill considering
we only use a single attribute from it.
2017-10-08 07:52:56 -04:00
bae6f80327 [commands] Split Cooldown state processing to two different functions.
This allows us to check if we are rate limited without
creating a new cool-down window for the command.
2017-10-03 03:57:06 -04:00
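A minimal sketch of the split, assuming illustrative names: a read-only query that reports tokens as of a timestamp, and a separate mutator that actually consumes a token and opens a window:

    class Cooldown:
        def __init__(self, rate, per):
            self.rate = rate        # allowed calls per window
            self.per = per          # window length in seconds
            self._window = 0.0      # start of the current window
            self._tokens = rate

        def get_tokens(self, current):
            # Read-only: report the token count as of `current` without
            # touching state, so callers can probe for a rate limit
            # without opening a new window.
            tokens = self._tokens
            if current > self._window + self.per:
                tokens = self.rate
            return tokens

        def update_rate_limit(self, current):
            # Mutating: consume one token, resetting the window first if
            # it has expired. Returns seconds to wait, or None if allowed.
            self._tokens = self.get_tokens(current)
            if self._tokens == self.rate:
                self._window = current
            if self._tokens == 0:
                return self.per - (current - self._window)
            self._tokens -= 1
            return None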
63fcfa6d02 [commands] Add CooldownMapping.from_cooldown factory classmethod. 2017-08-27 16:59:04 -04:00
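A hedged one-liner of the factory in use, with the argument order assumed to be rate, per, bucket type:

    mapping = CooldownMapping.from_cooldown(2, 30.0, BucketType.user)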
ff9f5749e1 Update copyright year to 2017. 2017-01-20 23:19:19 -05:00
d1d54a468a Rename Server to Guild everywhere. 2017-01-03 09:51:54 -05:00
e4b16851bf Slots use tuples instead now. 2017-01-03 09:51:50 -05:00
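The change in miniature: a tuple is immutable and marginally cheaper than the list form it replaces:

    __slots__ = ('name', 'value')   # previously: ['name', 'value']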
b7ffbca0c7 [commands] Added a method to reset command cooldown. 2016-09-08 07:02:33 -04:00
cd0de57d13 [commands] Implement a command cooldown system.
The command cooldown works in a windowed fashion: with a cooldown of 2
commands every 30 seconds, the first command opens a 30-second window;
up to 2 commands may run inside that window, and any further invocation
is rate limited until the window expires. This more or less matches the
common expectations of how cooldowns should behave.

These cooldowns can be bucketed along a single dimension of depth on a
per-user, per-guild, or per-channel basis. Of course, a global bucket
is also provided. These cannot be mixed, e.g. no per-channel per-user
cooldowns.

When a command cooldown is triggered, the error handlers will receive
an exception of type CommandOnCooldown with proper information
regarding the cooldown, such as retry_after and the bucket information
itself.
2016-07-22 18:05:38 -04:00
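A worked trace of the window described above, using the Cooldown sketch shown earlier in this log (2 commands every 30 seconds; timestamps are illustrative):

    cd = Cooldown(rate=2, per=30.0)

    cd.update_rate_limit(current=0.0)    # 1st command opens the window at t=0   -> None
    cd.update_rate_limit(current=10.0)   # 2nd command inside the window         -> None
    cd.update_rate_limit(current=20.0)   # 3rd command inside the window         -> 10.0 (retry_after)
    cd.update_rate_limit(current=31.0)   # window expired; fresh window at t=31  -> None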