# React-Concurrent-Cache

React-Concurrent-Cache leverages the React Suspense API to provide a configurable hook for building a clean, easy-to-use interface that fetches data and handles loading and error states.
## Using React-Concurrent-Cache

### Quickstart

```shell
npm install react-concurrent-cache --save
```

or with yarn:

```shell
yarn add react-concurrent-cache
```
### Basic usage

To build a cached resource, wrap a resolver function in `cache`.

Example:

```js
import { CacheStore } from 'react-concurrent-cache'

const cacheStore = new CacheStore()

const resource = cacheStore.cache((path, options = {}) => {
  return fetch(path, options).then((result) => {
    return result.json()
  })
})
```
Then, to use the cache, call `get` on the cached resource.

Example:

```js
const response = resource.get('my.url') // returns Promise<JSON response>
const response = resource.get('my.url') // returns Promise<JSON response> but does not make another request

// wait for the promise to resolve

const response = resource.get('my.url') // returns the JSON response (not a promise) and does not make a second request
```
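The caching behavior above can be sketched in plain JavaScript. This is an illustration of the idea only, not the library's actual implementation; `toyCache` is a hypothetical name:

```javascript
// Illustration only -- not react-concurrent-cache's real code.
// A cached resource memoizes one entry per argument list: repeated get()
// calls share a single in-flight promise, and once it settles the entry
// holds the plain value instead of the promise.
function toyCache(resolver) {
  const store = new Map()
  return {
    get(...args) {
      const key = JSON.stringify(args)
      if (!store.has(key)) {
        const entry = { value: resolver(...args) }
        entry.value.then((result) => {
          entry.value = result // swap the promise for the resolved value
        })
        store.set(key, entry)
      }
      return store.get(key).value
    },
  }
}
```

Because the entry is keyed by the stringified arguments, `get('my.url')` always maps to the same entry, which is why no second request goes out.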
To leverage this in React (using `React.Suspense`), wrap the `get` call in `suspendFor`.
Example:

```jsx
import React from 'react'
import { CacheStore, suspendFor } from 'react-concurrent-cache'

const cacheStore = new CacheStore()

const resource = cacheStore.cache((path, options = {}) => {
  return fetch(path, options).then((result) => {
    return result.json()
  })
})

function WidgetIndexPage() {
  return (
    <React.Suspense fallback={<Loading />}>
      <WidgetList />
    </React.Suspense>
  )
}

function WidgetList() {
  const widgets = suspendFor(resource.get('my.api/widgets'))

  return (
    <div>
      {widgets.map((widget) => (
        <Widget key={widget.id} widget={widget} />
      ))}
    </div>
  )
}
```
While the `widgets` endpoint is loading, React will suspend; once it has finished resolving, `suspendFor` returns the value, removing the need for any other loading-state management.
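This follows the common Suspense protocol: throw the pending promise (React catches it and retries after it settles), throw the error on failure, or return the value once resolved. A minimal plain-JavaScript sketch of the idea; `track` and `toySuspendFor` are hypothetical names, not this library's API:

```javascript
// Track a promise's lifecycle so a reader can inspect it synchronously.
function track(promise) {
  const entry = { status: 'pending', promise }
  promise.then(
    (value) => { entry.status = 'resolved'; entry.value = value },
    (error) => { entry.status = 'rejected'; entry.error = error }
  )
  return entry
}

// The Suspense protocol: throw to suspend, throw to fail, return when ready.
function toySuspendFor(entry) {
  if (entry.status === 'pending') throw entry.promise // React retries after it settles
  if (entry.status === 'rejected') throw entry.error  // surfaces to an Error Boundary
  return entry.value                                  // data is ready
}
```

This is why the component code above reads synchronously: the throw-and-retry dance is handled by React, not by your component.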
NOTE: It is important to define each resource only once, because the function's reference is used to key your cached responses.
## Preloading data

When a page needs multiple requests, you want to start requesting the data as soon as possible, but you don't want to block displaying things to users until all of it has loaded. It is good practice to preload data at the highest place in the tree where you know it will be used, even if the data isn't referenced in the code at that level. To do this, call `get` on your resource, but don't wrap it in `suspendFor` until you want it to suspend:

```jsx
import resource from './resource'

function Page() {
  resource.get('path/to/episodes')   // non-blocking
  resource.get('path/to/characters') // non-blocking

  return <PageToShowAllThatDataSomewhere />
}
```

In this scenario, both episodes and characters start loading when `Page` renders, while the components within it still wait for loading to complete. No waterfalls!
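The timing difference can be seen with plain promises, no React involved. A sketch under assumed 50 ms request latencies; `delay`, `waterfall`, and `preloaded` are illustrative names:

```javascript
// Simulate a request that takes `ms` milliseconds.
const delay = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms))

// Waterfall: the second request doesn't start until the first resolves (~100 ms total).
async function waterfall() {
  const episodes = await delay(50, 'episodes')
  const characters = await delay(50, 'characters')
  return [episodes, characters]
}

// Preloaded: both requests start immediately and resolve in parallel (~50 ms total).
async function preloaded() {
  const episodesP = delay(50, 'episodes')
  const charactersP = delay(50, 'characters')
  return [await episodesP, await charactersP]
}
```

Calling `get` early in `Page` corresponds to the `preloaded` shape: the fetches are kicked off up front, and `suspendFor` further down the tree only awaits work that is already in flight.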
## Clearing cached results / Refreshing data

Sometimes you want to clear a request's cache so that the request will go out again. Because of how the library works, if you clear a cached record that is still referenced, it will behave as it did on its first render and refetch the request — so clearing and refreshing are one and the same. Here's how you do it:

```jsx
import resource from './resource'

function Page() {
  return (
    <div>
      <button onClick={() => resource.clearAll()}>Clear/Refresh All Requests</button>
      <button onClick={() => resource.clear('/episodes')}>Clear/Refresh episodes</button>
    </div>
  )
}
```
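The clear/refresh semantics can be sketched in plain JavaScript. This is an illustration of the idea, not the library's implementation; `toyClearableCache` is a hypothetical name:

```javascript
// Illustration only: clearing an entry deletes it from the store,
// so the very next get() for that key refetches.
function toyClearableCache(resolver) {
  const store = new Map()
  return {
    get(key) {
      if (!store.has(key)) store.set(key, resolver(key))
      return store.get(key)
    },
    clear(key) { store.delete(key) }, // refresh one request
    clearAll() { store.clear() },     // refresh everything
  }
}
```

Since a referenced-but-cleared entry is simply absent from the store, re-reading it looks exactly like a first render: the resolver runs again.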
## Ingesting data

Sometimes you have data in the format of an API response that arrived another way — for instance, through server-rendered props. `set` adds data to the cache for the given arguments, so that calling `get` with those arguments will return that data:

```jsx
import { suspendFor } from 'react-concurrent-cache'
import resource from './resource'

function Component({ episodeData, id }) {
  resource.set(`path/to/episode/${id}`, episodeData)

  return <Episode episodeId={id} />
}

function Episode({ episodeId }) {
  const episode = suspendFor(resource.get(`path/to/episode/${episodeId}`))
  // episode === episodeData and does not make a request to the resolver

  return <EpisodeDisplay episode={episode} />
}
```
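The seeding behavior can be sketched in plain JavaScript. An illustration only, not the library's code; `toySeedableCache` is a hypothetical name:

```javascript
// Illustration only: set() stores an already-resolved entry for the given
// arguments, so a later get() with those arguments never calls the resolver.
function toySeedableCache(resolver) {
  const store = new Map()
  const keyOf = (args) => JSON.stringify(args)
  return {
    set(...argsAndValue) {
      const value = argsAndValue.pop() // last argument is the value to store
      store.set(keyOf(argsAndValue), value)
    },
    get(...args) {
      const key = keyOf(args)
      if (!store.has(key)) store.set(key, resolver(...args))
      return store.get(key)
    },
  }
}
```

Because `set` writes under the same key that `get` would compute, server-rendered data slots into the cache exactly as if it had been fetched.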
## Handling errors

React Error Boundaries can handle errors from your requests, so you don't have to build special error handling into your resolvers. When doing so, it is important to clear failed requests, because failures are stored in the cache too.
## Examples

- API resource based example - Demonstrates a simple API resource whose resolver is a thin wrapper around `fetch`. It shows how loading states are handled, and how caching keeps data in the store so that extra requests aren't made when navigating back to previous pages. You can also see that characters already loaded from other episode views use their cached values rather than making new requests. In addition, there is a button to test clearing the cache and see how that forces a reload. The last button demonstrates an error boundary and how it can handle scenarios where an API request fails (NOTE: CodeSandbox shows an extra overlay on errors, but you can close it to see the error).
- RESTful resource based example - Very similar to the approach above, but instead of using just the wrapper around `fetch`, it shows how the implementation can be easier/different if you define extra resolvers in the `ResourceProvider`.
- Preloading vs not preloading example - It's important to think about how your data will load on your page, and although react-concurrent-cache provides some simple ways of handling that data, you still need to load it in the right place. This example shows the difference in load times between a waterfall (not using preload) approach and a preloading approach.

## API
## How does this approach work with the direction of React 18.3 and beyond?

React is building `use` and `cache` into its experimental builds, and they work very similarly to the way `suspendFor` and `cache` work in this library. The hope is that once those stabilize, most of this library will be redundant and you can simply swap things out. This library does, however, provide a more robust cache setup that gives you more control over the cache (like clearing and manually setting results).