wf-fragment-adapter is an adapter for
waveform-data that allows you to
render only a subset of the full graph, and treat it as the full graph.
Let's say you have a sound file that is three hours long, and you're only interested in rendering the graph representing 01:00:00 to 01:30:00. You'd do this:
var WaveformData = require('waveform-data');
var createFragmentAdapter = require('wf-fragment-adapter').createFragmentAdapter;

var start = 3600;    // One hour
var duration = 1800; // Half an hour

// wfArrayBuffer holds the raw waveform data for the full three hours.
var data = new WaveformData(wfArrayBuffer, createFragmentAdapter(start, duration));
WaveformData provides a mechanism for resampling and offsetting data to focus
on parts of the graph, so why write a separate adapter for doing the same thing?
If you for some reason want to enforce a permanent view on parts of the graph,
using WaveformData's offsets and resampling is not practical, because you must
manually re-apply them whenever you change the current view. Couple this
with the fact that you cannot nest offsets or segments, and you end up with a
solution where knowledge of the fragment spreads across your entire app.
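The core idea can be sketched without waveform-data at all: wrap the backing samples and expose only the slice from start to start + duration as if it were the whole data set. Note that createFragmentView, the samples array, and the at method below are hypothetical names chosen for illustration, not the package's actual API:

```javascript
// Hypothetical sketch of the fragment idea (not the real wf-fragment-adapter
// API): expose only samples[start .. start + duration) as the full data set.
function createFragmentView(samples, start, duration) {
  return {
    length: duration,    // consumers only ever see the fragment's length
    at: function (i) {   // index 0 maps to the fragment's start
      if (i < 0 || i >= duration) {
        throw new RangeError('index ' + i + ' is outside the fragment');
      }
      return samples[start + i];
    }
  };
}

// Three hours of fake data, one value per second.
var full = [];
for (var i = 0; i < 3 * 3600; i++) {
  full.push(i);
}

// View only 01:00:00 to 01:30:00. Callers of `view` never need to know
// about start or duration again; that knowledge lives in one place.
var view = createFragmentView(full, 3600, 1800);
console.log(view.length); // 1800
console.log(view.at(0));  // 3600
```

Because the view exposes the same shape as the full data set, any consumer written against the full data works unchanged against the fragment.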
When rendering multiple UI components from the same waveform data, using this adapter to enforce the fragment being viewed reduces overall complexity and ensures that the actual start and duration need to be known in only one place.
As an example, this adapter was originally written to handle a situation where the sound file to be played/visualized contained start/end padding - information that was relevant only while editing, not during playback. Removing the pads with an adapter spared us from spreading knowledge of the start and duration throughout the system.