Dispatcher
The dispatcher provides advanced features and fetching modes such as retries, deduplication, queueing, and canceling. It catalogs requests with unique IDs, tracks their lifecycle, and triggers them at the right time and in the right way.
Every request in the dispatcher is stored in the queue structure. This allows us to perform many operations (e.g. stopping, pausing, or starting) on the dispatched requests. However, this does not mean that all requests will be sent individually; there are multiple dispatching modes available, which you can read about in the dispatching modes section.
- Orchestrates request flow and lifecycle
- Retries, deduplicates, manages offline mode, and more
- Allows pausing and resuming individual requests or whole queue groups
- Bridges the adapter, cache, and manager functionality
How it works
Each request triggered with `send()`, unlike the `exec()` method, goes through the entire lifecycle in the library.
First, we check the execution mode: concurrent, queued, cancelable, or deduplicated. This determines how we handle the current and previous requests. Next, we add the request to a queue, which is a group of requests with the same `queryKey`. Once a request is picked from the queue to be triggered, we assign it a `requestId` and execute it in the adapter of our choice. From this point, we trigger the entire lifecycle, including retries, offline handling, and request and response events. Finally, we remove the request from the queue (whether successful or not) and pass the data to the cache for state handling.
Two instances
There are two dispatcher instances in the `Client` class: one for querying and one for mutation requests. This design provides greater flexibility: you can configure data querying and data mutation behavior separately, for example by setting different default options for each dispatcher.
QueryKey
The `queryKey` is a unique identifier for request queues. It is used to propagate and receive request events on a given queue and to manage incoming and outgoing requests. Each detected unique `queryKey` creates an isolated queue array. By default, keys are auto-generated from the request's endpoint and URL parameters, but you can also set the key manually when creating the `Request` or by using a generic generator.
```typescript
const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
});

const queryKey = getUser.queryKey; // "/users/:userId"
const queryKeyWithParams = getUser.setParams({ userId: 1 }).queryKey; // "/users/1"
const queryKeyWithQueryParams = getUser.setQueryParams({ page: 1 }).queryKey; // "/users/:userId"
```
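Conceptually, the auto-generation above boils down to substituting the applied URL parameters into the endpoint. Here is a minimal sketch; the `getAutoQueryKey` helper is illustrative, not part of the library:

```typescript
// Simplified sketch of auto-generated queryKey creation.
// The real library derives keys internally; this helper is illustrative.
type ParamsRecord = Record<string, string | number>;

const getAutoQueryKey = (endpoint: string, params?: ParamsRecord): string => {
  if (!params) return endpoint;
  // Replace each ":param" placeholder with its applied value
  return Object.entries(params).reduce(
    (key, [name, value]) => key.replace(`:${name}`, String(value)),
    endpoint,
  );
};

getAutoQueryKey("/users/:userId"); // "/users/:userId"
getAutoQueryKey("/users/:userId", { userId: 1 }); // "/users/1"
```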
Custom queryKey
You can also set a custom query key:
```typescript
import { client } from "./api";

const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
  queryKey: "CUSTOM_QUERY_KEY",
});

console.log(getUser.queryKey); // "CUSTOM_QUERY_KEY"
```
Generic queryKey
You can also set a generic query key:
```typescript
import { client } from "./api";

client.setQueryKeyMapper((request) => {
  if (request.requestOptions.endpoint === "/users/:userId") {
    return `CUSTOM_QUERY_KEY_${request.params?.userId || "unknown"}`;
  }
});

const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
});

console.log(getUser.setParams({ userId: 1 }).queryKey); // "CUSTOM_QUERY_KEY_1"
```
RequestId
The `requestId` is auto-generated by the dispatchers when a request is executed. It is unique within a single `Client` instance, but we do not guarantee its uniqueness between different `Client` instances. It is used for precise communication with the dispatcher, for example when listening for particular request events or removing a request.
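Conceptually, a `requestId` can be modeled as a per-client counter combined with the queue key. This is only an illustrative sketch; the real id format is an internal detail of the dispatcher:

```typescript
// Illustrative sketch: ids unique within a single client instance.
// The actual id format is an internal detail of the dispatcher.
const createRequestIdFactory = () => {
  let counter = 0;
  return (queryKey: string) => `${queryKey}_${++counter}`;
};

const getRequestId = createRequestIdFactory();
getRequestId("/users/1"); // "/users/1_1"
getRequestId("/users/1"); // "/users/1_2"
```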
Available methods
| Name | Description |
| --- | --- |
| `stopRequest` | Stop a particular request |
| `stop` | Stop the request queue and cancel all started requests; they will be treated as not started |
| `startRequest` | Start a particular request |
| `start` | Start request handling by `queryKey` |
| `setQueue` | Set a new queue storage value |
| `performRequest` | Run a request; once it is done, check whether it succeeded or was aborted. The outcome can differ when a previous call was cancelled and removed from the queue before this request resolved |
| `pause` | Pause the request queue, but do not cancel already started requests |
| `incrementQueueRequestCount` | Add to the request count for a `queryKey` |
| `hasRunningRequests` | Get the value based on the currently running requests |
| `hasRunningRequest` | Check if a request is currently processing |
| `getRunningRequests` | Get currently running requests |
| `getRunningRequest` | Get a running request by id |
| `getRequest` | Return a request from the queue state |
| `getQueuesKeys` | Return all queue keys |
| `getQueueRequestCount` | Get the count of requests with the same `queryKey` |
| `getQueue` | Return the queue state object |
| `getIsActiveQueue` | Get the active queue status based on the stopped status |
| `getAllRunningRequests` | Get currently running requests from all `queryKey`s |
| `flushQueue` | Flush the queue requests |
| `flush` | Flush all available requests from all queues |
| `deleteRunningRequests` | Delete all started requests, but do NOT clear them from the queue and do NOT cancel them |
| `deleteRunningRequest` | Delete a running request by id, but do NOT clear it from the queue and do NOT cancel it |
| `delete` | Delete from the storage and cancel the request |
| `createStorageItem` | Create a storage element from a request |
| `clearQueue` | Clear requests from the queue cache |
| `clear` | Clear all running requests and storage |
| `cancelRunningRequests` | Cancel all started requests, but do NOT remove them from the main storage |
| `cancelRunningRequest` | Cancel a started request, but do NOT remove it from the main storage |
| `addRunningRequest` | Add a request to the running requests list |
| `addQueueElement` | Add a new element to storage |
| `add` | Add a request to the dispatcher handler |
Features
Here is a list of features that the dispatcher provides:
Retrying
One of the features that the dispatcher provides is retrying failed requests. It can retry requests until a successful result is obtained or the retry limit is reached.
Below is an example of how to set the request to retry and specify the time between attempts. The response promise will be resolved on success or after the last retry attempt.
```typescript
const getUser = client.createRequest()({
  method: "GET",
  endpoint: "/users/:userId",
  retry: 5,
  retryTime: 3000,
});

const response = await getUser.send();
```
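Under the hood, retrying amounts to re-running the request until it succeeds or the limit is reached, waiting `retryTime` milliseconds between attempts. A minimal sketch of that loop; the `sendWithRetry` helper and `fetcher` callback are illustrative stand-ins for the dispatcher's internals:

```typescript
// Minimal sketch of retry behavior (illustrative, not the library's implementation).
const sendWithRetry = async <T>(
  fetcher: () => Promise<T>,
  retry: number,
  retryTime: number,
): Promise<T> => {
  let lastError: unknown;
  // First attempt plus `retry` additional attempts
  for (let attempt = 0; attempt <= retry; attempt++) {
    try {
      return await fetcher();
    } catch (error) {
      lastError = error;
      if (attempt < retry) {
        // Wait `retryTime` ms before the next attempt
        await new Promise((resolve) => setTimeout(resolve, retryTime));
      }
    }
  }
  throw lastError;
};
```

With `retry: 5` and `retryTime: 3000`, up to six attempts are made in total, and the promise settles on the first success or after the last failed attempt.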
Queues
Dispatchers store requests in queues, which gives us greater control over the request flow. This flexibility allows us to `stop()`, `pause()`, and `start()` request execution. We can apply these actions to a single request or to entire queues (a new queue is created for each unique `queryKey`).
What is the difference between `stop()` and `pause()`? The main difference lies in how they handle the currently executing request:

- `stop()` - Cancels the currently executing request, along with all other requests in the queue.
- `pause()` - Allows the currently executing request to finish, but prevents any further requests from starting.

In both scenarios, the requests are not removed from the queue and can be resumed later.
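The distinction can be pictured with a tiny queue model; `MiniQueue` below is illustrative only, not the library's implementation:

```typescript
// Sketch of the stop() vs pause() distinction (illustrative).
type QueueItem = { requestId: string; controller: AbortController; started: boolean };

class MiniQueue {
  stopped = false;
  items: QueueItem[] = [];

  // pause(): keep in-flight requests running, just prevent new ones from starting
  pause() {
    this.stopped = true;
  }

  // stop(): additionally abort requests that already started;
  // they stay in the queue and are treated as not started
  stop() {
    this.stopped = true;
    this.items
      .filter((item) => item.started)
      .forEach((item) => {
        item.controller.abort();
        item.started = false;
      });
  }

  // start(): resume processing
  start() {
    this.stopped = false;
  }
}
```

Note that neither method empties `items`: the queued requests survive and can run again after `start()`.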
Start
When a queue is stopped, you can use `start()` to resume processing its requests.
Pause
To pause requests, use the `pause()` method on the dispatcher.

```typescript
// To pause the queue
client.fetchDispatcher.pause("queryKey");
```

You cannot `pause()` individual requests; they can only be stopped.
Stop
To stop a queue, use the `stop()` method. It cancels the in-progress request and holds all others.

```typescript
// To stop the queue
client.fetchDispatcher.stop("queryKey");
```

You can also stop an individual request with the `stopRequest()` method. It cancels the request and removes it from the queue.

```typescript
// To stop an individual request
client.fetchDispatcher.stopRequest("queryKey", "requestId");
```
Offline
When the connection is lost, the queue is stopped, and any failed or interrupted requests will wait for the connection to be restored before being retried. This prevents data loss and allows us to leverage caching abilities.
To disable offline mode, set the request's `offline` option to `false`:

```typescript
const newRequest = client.createRequest()({
  endpoint: "/request",
  offline: false,
});
```
Modes
Each dispatcher queue can operate in several modes, which can be selected via request properties.
Concurrent
In this mode, requests are not limited and can be executed concurrently. This is the default mode for request execution, active when the `queued` property on a request is set to `false` (the default). Use it to send multiple requests simultaneously and receive responses in parallel.
Deduplication
Deduplication optimizes data exchange with the server. If we ask the server for the same data twice at the same time with different requests, this mode will perform one call and propagate the response to both sources.
Enable this mode by setting the request's `deduplication` prop to `true`.
Imagine different components making the same request. Instead of sending the same request multiple times, you can group them into one and listen to its response. This way, you can avoid over-fetching and improve your application's performance.
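Deduplication can be sketched as sharing a single in-flight promise per `queryKey`; the `deduplicatedFetch` helper below is illustrative, not a library API:

```typescript
// Sketch of deduplication (illustrative): concurrent identical requests
// share one in-flight promise keyed by queryKey.
const inFlight = new Map<string, Promise<unknown>>();

const deduplicatedFetch = <T>(queryKey: string, fetcher: () => Promise<T>): Promise<T> => {
  const existing = inFlight.get(queryKey);
  if (existing) return existing as Promise<T>;
  // Forget the promise once it settles so later requests fetch fresh data
  const promise = fetcher().finally(() => inFlight.delete(queryKey));
  inFlight.set(queryKey, promise);
  return promise;
};
```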
Cancelable
Cancelable mode avoids race conditions when multiple requests are sent, but only the result of the last one is desired. This mode is ideal for paginated lists of data, where only a single page needs to be shown, even if the user triggers new requests with rapidly changing pagination.
Enable this mode by setting the request's `cancelable` prop to `true`.
Cancelable mode is very useful for paginated data. You can cancel previous requests (for example, when pages are switched dynamically) and send only the current one, thus avoiding over-fetching.
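Cancelable mode can be sketched with an `AbortController` per `queryKey`, where each new request aborts the previous one; the `cancelableFetch` helper is illustrative, not a library API:

```typescript
// Sketch of cancelable mode (illustrative): each new request aborts
// the previous in-flight one with the same queryKey.
const controllers = new Map<string, AbortController>();

const cancelableFetch = <T>(
  queryKey: string,
  fetcher: (signal: AbortSignal) => Promise<T>,
): Promise<T> => {
  // Abort the previous request for this queryKey, if any
  controllers.get(queryKey)?.abort();
  const controller = new AbortController();
  controllers.set(queryKey, controller);
  return fetcher(controller.signal);
};
```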
Queued
This mode is ideal for sending requests sequentially. It allows you to combine requests into an ordered queue that is processed one item at a time. In this mode, you can `start`, `stop`, or `pause` individual requests or the whole queue.

Enable this mode by setting the request's `queued` prop to `true`.
This mode is ideal for transferring large amounts of data. It mitigates the impact of network issues by processing requests sequentially, which prevents multiple requests from failing if the connection is lost. Additionally, it improves user experience by enabling parts of an application to become available as each data transfer completes, rather than making the user wait for all transfers to finish.
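Sequential processing can be sketched as chaining each task after the previous one, so requests complete strictly in order; `SequentialQueue` is an illustrative model, not a library API:

```typescript
// Sketch of queued mode (illustrative): requests with the same queryKey
// are processed strictly one at a time, in submission order.
class SequentialQueue {
  private tail: Promise<unknown> = Promise.resolve();

  add<T>(task: () => Promise<T>): Promise<T> {
    // Chain the task after whatever is currently queued
    const result = this.tail.then(() => task());
    // Keep the chain alive even if a task fails
    this.tail = result.catch(() => undefined);
    return result;
  }
}
```

Even if a later request would finish faster on its own, it does not start until the previous one settles.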