8 changes: 4 additions & 4 deletions src/valkey-benchmark.c
@@ -68,10 +68,10 @@
 #define MAX_LATENCY_PRECISION 4
 #define MAX_THREADS 500
 #define CLUSTER_SLOTS 16384
-#define CONFIG_LATENCY_HISTOGRAM_MIN_VALUE 10L /* >= 10 usecs */
-#define CONFIG_LATENCY_HISTOGRAM_MAX_VALUE 3000000L /* <= 3 secs(us precision) */
-#define CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE 3000000L /* <= 3 secs(us precision) */
-#define SHOW_THROUGHPUT_INTERVAL 250 /* 250ms */
+#define CONFIG_LATENCY_HISTOGRAM_MIN_VALUE 10L /* >= 10 usecs */
+#define CONFIG_LATENCY_HISTOGRAM_MAX_VALUE 60000000L /* <= 60 secs(us precision) */
Contributor:
The PR description says requests take up to 30s (which is also quite large). Do you see requests that can take up to 60s?

Contributor Author:
For normal commands and/or in steady state, probably not.
I'm using it to test commands that I expect to take a while (like scripts reading huge keys) and in edge-case states (like during a load test or slot migrations).

Contributor:
Are there any side effects of increasing this limit? I'm wondering why it was kept at 3s for so long.

Contributor Author:
I guess the "normal" use case of the benchmark tool is to test fast commands and reach high throughput; my use case is not the classic one and thus wasn't necessarily supported properly.
As for side effects, increasing the max value means there will be more buckets in the histogram, but I think the overhead is negligible.

Contributor:
How many more buckets? What's the size of the buckets? Powers of two?

+#define CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE 60000000L /* <= 60 secs(us precision) */
+#define SHOW_THROUGHPUT_INTERVAL 250 /* 250ms */

#define CLIENT_GET_EVENTLOOP(c) (c->thread_id >= 0 ? config.threads[c->thread_id]->el : config.el)
