
A schema-based DynamoDB modeling tool, high-level API, and type-generator
built to supercharge single-table designs! ⚡
Marshalling ✅ Validation ✅ Where-style query API ✅ and more.
Fully-typed support for ESM and CommonJS
- ✨ Key Features
- 🚀 Getting Started
- 🧙‍♂️ Data IO Order of Operations
- 📖 Schema API
- ⚙️ Model Schema Options
- 📦 Batch Requests
- ❓ FAQ
- 🤝 Contributing
- 📝 License
- 💬 Contact
- Easy-to-use declarative API for managing DDB tables, connections, and models
- Auto-generated typings for model items
- Custom attribute aliases for each model
- Create attributes/properties from combinations of other attributes/properties
- Type checking and conversions for all DDB attribute types
- Validation checks for individual properties and entire objects
- Where-style query API
- Default values
- Property-level get/set modifiers
- Schema-level get/set modifiers
- Required/nullable property assertions
- Easy access to a streamlined DynamoDB client (more info here)
- Automatic retries for batch operations using exponential backoff (more info here)
- Support for transactions — group up to 100 operations into a single atomic transaction!
**1. Install the package:**

```sh
npm install @nerdware/ddb-single-table
```
**2. Create your table:**

```ts
import { Table } from "@nerdware/ddb-single-table";
// OR const { Table } = require("@nerdware/ddb-single-table");

export const myTable = new Table({
  tableName: "my-table-name",
  // The `tableKeysSchema` includes all table and index keys:
  tableKeysSchema: {
    pk: {
      type: "string", // keys can be "string", "number", or "Buffer"
      required: true,
      isHashKey: true,
    },
    sk: {
      type: "string",
      required: true,
      isRangeKey: true,
      index: {
        // This index allows queries using "sk" as the hash key
        name: "Overloaded_SK_GSI",
        rangeKey: "data",
        global: true,
        project: true, // project all attributes
        throughput: { read: 5, write: 5 },
      },
    },
    data: {
      type: "string",
      required: true,
      index: {
        // This index allows queries using "data" as the hash key
        name: "Overloaded_Data_GSI",
        rangeKey: "sk",
        global: true,
        project: true, // project all attributes
        throughput: { read: 5, write: 5 },
      },
    },
  },
  // You can provide your own DDB client instance or configs for a new one:
  ddbClient: {
    // This example shows how to connect to dynamodb-local:
    region: "local",
    endpoint: "http://localhost:8000",
    // All AWS SDK client auth methods are supported.
    // Since this example is using dynamodb-local, we simply use
    // hard-coded "local" credentials, but for production you would
    // obviously want to use a more secure method like an IAM role.
    credentials: {
      accessKeyId: "local",
      secretAccessKey: "local",
    },
  },
});
```
**3. Create a model, and generate item-typings from its schema:**

```ts
import { myTable } from "./path/to/myTable.ts";
import { isValid } from "./path/to/some/validators.ts";
import type { ItemTypeFromSchema } from "@nerdware/ddb-single-table";

const UserModel = myTable.createModel({
  pk: {
    type: "string",
    alias: "id", // <-- Each Model can have custom aliases for keys
    default: ({ createdAt }: { createdAt: Date }) => {
      return `USER#${createdAt.getTime()}`;
    },
    validate: (id: string) => /^USER#\d{10,}$/.test(id),
    required: true,
  },
  sk: {
    type: "string",
    default: (userItem: { pk: string }) => {
      // Functional defaults are called with the entire UNALIASED item as the first arg.
      return `#DATA#${userItem.pk}`; // <-- Not userItem.id
    },
    validate: (sk: string) => /^#DATA#USER#\d{10,}$/.test(sk),
    required: true,
  },
  data: {
    type: "string",
    alias: "email",
    validate: (value: string) => isValid.email(value),
    required: true,
  },
  profile: {
    type: "map", // Nested attributes ftw!
    schema: {
      displayName: { type: "string", required: true },
      businessName: { type: "string", nullable: true },
      photoUrl: { type: "string", nullable: true },
      favoriteFood: { type: "string", nullable: true },
      // You can nest attributes up to the DynamoDB max depth of 32
    },
  },
  checklist: {
    type: "array",
    required: false,
    schema: [
      {
        type: "map",
        schema: {
          id: {
            // Nested attributes have the same awesome schema capabilities!
            type: "string",
            default: (userItem: { sk: string }) => {
              return `FOO_CHECKLIST_ID#${userItem.sk}#${Date.now()}`;
            },
            validate: (id: string) => isValid.checklistID(id),
            required: true,
          },
          description: { type: "string", required: true },
          isCompleted: { type: "boolean", required: true, default: false },
        },
      },
    ],
  },
  /* By default, 'createdAt' and 'updatedAt' attributes are created for
  each Model (unless explicitly disabled). Here's an example with these
  attributes explicitly provided: */
  createdAt: {
    type: "Date",
    required: true,
    default: () => new Date(),
  },
  updatedAt: {
    type: "Date",
    required: true,
    default: () => new Date(),
    /* transformValue offers powerful hooks which allow you to modify values
    TO and/or FROM the db. Each attribute can define its own transformValue
    hooks. */
    transformValue: {
      toDB: () => new Date(),
      /* ^ For data traveling TO the db (write ops). transformValue can also
      include a `fromDB` fn to transform values coming FROM the db. If
      specified, your `fromDB` transforms are applied for both write and
      read operations. */
    },
  },
});

// The `ItemTypeFromSchema` type is a helper type which converts
// your schema into a TypeScript type for your model's items.
export type UserItem = ItemTypeFromSchema<typeof UserModel.schema>;
```
**4. Use your model and generated types:**

```ts
import { UserModel, type UserItem } from "./path/to/UserModel.ts";

// Create a new user:
const newUser = await UserModel.createItem({
  email: "human_person@example.com",
  profile: {
    displayName: "Guy McHumanPerson",
    businessName: "Definitely Not a Penguin in a Human Costume, LLC",
    photoUrl: "s3://my-bucket-name/path/to/human/photo.jpg",
    favoriteFood: null,
  },
  checklist: [
    { description: "Find fish to eat" },
    { description: "Return human costume by 5pm" },
  ],
});

// You can use explicit type annotations, or allow TS to infer types.
// For example, the line below yields the same as the above example:
//   const newUser: UserItem = await UserModel.createItem(...);

// The `newUser` is of type `UserItem`, with all keys aliased as specified:
const { id, sk, email, profile, checklist, createdAt, updatedAt }: UserItem = {
  ...newUser,
};

// You can also use the model to query for items using `where` syntax:
const usersWhoAreDefinitelyHuman = await UserModel.query({
  where: {
    email: {
      beginsWith: "human_", // <-- All DDB operators are supported!
    },
  },
});

// There are a lot more features I've yet to document, but hopefully
// this is enough to get you started! Pull requests are welcome! 🐧
```
**5. Profit!** 💰🥳🎉
When any Model method is invoked, it begins a request-response cycle in which DDB-ST applies a series of transformations and validations to ensure that the data conforms to the schema defined for the Model. DDB-ST collectively refers to these transformations and validations as "IO-Actions", which are divided into two groups: `toDB` and `fromDB`. The `toDB` actions are applied to Model-method arguments before they're passed off to the underlying AWS SDK, while the `fromDB` actions are applied to all values returned from the AWS SDK before they're returned to the caller.

The `toDB` and `fromDB` flows each apply their IO-Actions in a specific order:
> [!IMPORTANT]
> Some Model methods skip certain IO-Actions depending on the method's purpose. For example, `Model.updateItem` skips the "Required" Checks IO-Action, since that method is commonly used to write partial updates to items.
**`toDB` IO-Actions**

| Order | IO-Action | Description | Skipped by Method(s) |
| :---: | :-------- | :---------- | :------------------- |
| 1 | Alias Mapping | Replaces attribute aliases with attribute names. | |
| 2 | Set Defaults | Applies defaults defined in the schema. | `updateItem` |
| 3 | Attribute `toDB` Modifiers | Runs your `transformValue.toDB` functions. | |
| 4 | Item `toDB` Modifier | Runs your `transformItem.toDB` function. | `updateItem` |
| 5 | Type Checking | Checks properties for conformance with their `type`. | |
| 6 | Attribute Validation | Validates individual item properties. | |
| 7 | Item Validation | Validates an item in its entirety. | `updateItem` |
| 8 | Convert JS Types | Converts JS types into DynamoDB types. | |
| 9 | "Required" Checks | Checks for `required` and `nullable` attributes. | `updateItem` |
**`fromDB` IO-Actions**

| Order | IO-Action | Description |
| :---: | :-------- | :---------- |
| 1 | Convert JS Types | Converts DynamoDB types into JS types. |
| 2 | Attribute `fromDB` Modifiers | Runs your `transformValue.fromDB` functions. |
| 3 | Item `fromDB` Modifier | Runs your `transformItem.fromDB` function. |
| 4 | Alias Mapping | Replaces attribute names with attribute aliases. |
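
To make the ordering concrete, here's a rough sketch of how the `toDB` flow might process a `createItem` argument for the `UserModel` defined in the Getting Started section (values are abbreviated and illustrative, not actual library output):

```ts
// The caller provides ALIASED keys ("email" instead of "data"):
const input = { email: "human_person@example.com", profile: { displayName: "Guy" } };

// 1. Alias Mapping:    { data: "human_person@example.com", profile: { ... } }
// 2. Set Defaults:     { pk: "USER#...", sk: "#DATA#USER#...", data: "...", ... }
// 3. Attribute toDB:   each attribute's transformValue.toDB hook runs
// 4. Item toDB:        the Model's transformItem.toDB hook runs (if defined)
// 5-7. Type checking, attribute validation, and item validation
// 8. Convert JS Types: e.g., Date objects become Unix timestamps
// 9. "Required" checks run, then the item is handed off to the AWS SDK.
```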
DDB-ST provides a declarative schema API for defining your table and model schemas. A schema is a plain object in which each key defines an attribute. Each attribute definition can include any number of configs, which define the attribute's type, validation, default value, and other properties.

There are two kinds of schema in DDB-ST:
- **Table-Keys Schema**
  - This schema defines the table's hash and range keys, as well as any keys which serve as primary/hash keys for secondary indexes.
  - There is only one Table-Keys Schema, and the attributes defined within it are shared across all Models.
  - If a Model simply re-uses all of the existing configs for an attribute defined in the Table-Keys Schema, the attribute can be omitted from the Model's schema. In practice, however, it is encouraged to always include such attributes in the Model's schema, as this makes the Model and its schema easier to understand.
- **Model Schema**
  - Each Model has its own schema that may include any number of key and non-key attribute definitions.
  - If a Model Schema includes key attributes, those attributes must also be defined in the Table-Keys Schema.
  - Each Model Schema may specify its own custom configs for key attributes, including `alias`, `default`, `validate`, and `transformValue`. Attribute configs that affect a key attribute's type (`type`, `required`) must match the Table-Keys Schema (see the sketch below).
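
As a minimal sketch of this relationship (attribute and Model names are illustrative):

```ts
// Table-Keys Schema: defines the table's keys, shared by all Models.
const TableKeysSchema = {
  pk: { type: "string", required: true, isHashKey: true },
  sk: { type: "string", required: true, isRangeKey: true },
} as const satisfies TableKeysSchemaType;

// Model Schema: re-declares the keys with Model-specific configs.
const ProductModelSchema = {
  pk: {
    type: "string", // must match the Table-Keys Schema
    required: true, // must match the Table-Keys Schema
    alias: "productID", // Model-specific
    default: () => `PRODUCT#${Date.now()}`, // Model-specific
  },
  sk: { type: "string", required: true },
  name: { type: "string", required: true },
} as const satisfies ModelSchemaType;
```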
The following schema configs are used to define attributes in your schema:
| Config Name | Description | Can Use in Table-Keys Schema? | Can Use in Model Schema? |
| :---------- | :---------- | :---------------------------: | :----------------------: |
| `isHashKey` | Indicates whether the attribute is a table hash key. | ✅ | ❌ |
| `isRangeKey` | Indicates whether the attribute is a table range key. | ✅ | ❌ |
| `index` | Secondary-index configs defined on the index's hash key. | ✅ | ❌ |
| `alias` | An optional alias to apply to the attribute. | ✅ | ✅ |
| `type` | The attribute's type. | ✅ | ✅ |
| `schema` | An optional schema for nested attributes. | ✅ | ✅ |
| `oneOf` | An optional array of allowed values for enum attributes. | ✅ | ✅ |
| `nullable` | Indicates whether the attribute value may be `null`. | ✅ | ✅ |
| `required` | Indicates whether the attribute is required. | ✅ | ✅ |
| `default` | An optional default value to apply. | ✅ | ✅ |
| `validate` | An optional validation function to apply. | ✅ | ✅ |
| `transformValue` | An optional dictionary of data transformation hooks. | ✅ | ✅ |
**`isHashKey`**

A boolean value which indicates whether the attribute is a table hash key.
```ts
const TableKeysSchema = {
  pk: {
    type: "string",
    isHashKey: true,
    required: true, // Key attributes must always include `required: true`
  },
} as const satisfies TableKeysSchemaType;
```
**`isRangeKey`**

A boolean value which indicates whether the attribute is a table range key.
```ts
const TableKeysSchema = {
  sk: {
    type: "string",
    isRangeKey: true,
    required: true, // Key attributes must always include `required: true`
  },
  // ... other attributes ...
} as const satisfies TableKeysSchemaType;
```
**`index`**

Secondary-index configs, defined within the attribute config of the index's hash key. See type: `SecondaryIndexConfig`.
```ts
const TableKeysSchema = {
  fooIndexHashKey: {
    type: "string",
    required: true, // Key attributes must always include `required: true`
    index: {
      // The index config must specify a `name` — all other index configs are optional.
      name: "FooIndex",
      // `rangeKey` defines the attribute to use for the index's range key, if any.
      rangeKey: "barIndexRangeKey",
      /**
       * `global` is a boolean that indicates whether the index is global.
       *
       *   `true` = global index (default)
       *   `false` = local index
       */
      global: true,
      /**
       * `project` is used to configure the index's projection type.
       *
       *   `true` = project ALL attributes
       *   `false` = project ONLY the index keys (default)
       *   `string[]` = project ONLY the specified attributes
       */
      project: true,
      /**
       * `throughput` is used to configure provisioned throughput for the index.
       *
       * If your table's billing mode is PROVISIONED, this is optional.
       * If your table's billing mode is PAY_PER_REQUEST, do not include this.
       */
      throughput: {
        read: 5, // RCU (Read Capacity Units)
        write: 5, // WCU (Write Capacity Units)
      },
    },
  },
  // ... other attributes ...
} as const satisfies TableKeysSchemaType;
```
**`alias`**

An optional Model-specific alias to apply to a key attribute. An attribute's `alias` serves as its name outside of the database for the Model in which the alias is defined. For example, a key attribute named `"pk"` will always be `"pk"` in the database, but if a Model configures the field with an alias of `"id"`, then objects returned from the Model's methods will include the field `"id"` rather than `"pk"`. Similarly, the attribute alias can be used in arguments provided to Model methods.

During write operations, if the object provided to the Model method contains a key matching a schema-defined `alias` value, the key is replaced with the attribute's name. For both read and write operations, when data is returned from the database, this key-switch occurs in reverse: any object keys which match an attribute with a defined `alias` are replaced with their respective `alias`.
> [!IMPORTANT]
> All of a Model's `alias` values must be unique, or the Model's constructor will throw an error.
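
For example, here's a sketch of the alias key-switch in action, using the `UserModel` from the Getting Started section (the `getItem` argument shape is an assumption):

```ts
// "pk" is aliased as "id", so callers use "id" everywhere:
const user = await UserModel.getItem({ id: "USER#1577096641" });

// On the way in, "id" was swapped for "pk"; on the way out,
// "pk" was swapped back, so the returned item exposes "id":
console.log(user?.id); // => "USER#1577096641"
```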
**`type`**

The attribute's type. The following `type` values are supported:
| Attribute Type | DynamoDB Representation | Can use for KEY attributes? | Can use for NON-KEY attributes? |
| :------------- | :---------------------- | :-------------------------: | :-----------------------------: |
| `"string"` | `"S"` (String) | ✅ | ✅ |
| `"number"` | `"N"` (Number) | ✅ | ✅ |
| `"Buffer"` | `"B"` (Binary) | ✅ | ✅ |
| `"boolean"` | `"BOOL"` (Boolean) | ❌ | ✅ |
| `"Date"` | Converted to Unix timestamp (Number) | ❌ | ✅ |
| `"map"` | `"M"` (Map) | ❌ | ✅ |
| `"array"` | `"L"` (List) | ❌ | ✅ |
| `"tuple"` | `"L"` (List) | ❌ | ✅ |
| `"enum"` | `"S"` (String) | ❌ | ✅ |
The "map"
, "array"
, and "tuple"
types facilitate nested data structures up to the DynamoDB max depth of 32.
Nested data structures are defined using the schema
attribute config.
The enum
type is used to limit the possible values of a string attribute to a specific set of values using the oneOf
attribute config.
**`schema`**

The `schema` attribute config is used with `"map"`, `"array"`, and `"tuple"` attributes to define nested data structures. The way `schema` is used depends on the attribute's `type`:
```ts
const UserModelSchema = {
  // A map attribute with a nested schema:
  myMap: {
    type: "map",
    schema: {
      fooKey: { type: "string", required: true },
      anotherField: { type: "string" },
    },
  },
  // An array attribute that simply holds strings:
  myArray: {
    type: "array",
    schema: [{ type: "string" }],
  },
  // An array attribute with a nested map schema:
  myChecklist: {
    type: "array",
    schema: [
      {
        type: "map",
        schema: {
          id: { type: "string", required: true },
          description: { type: "string", required: true },
          isCompleted: { type: "boolean", required: true, default: false },
        },
      },
    ],
  },
  // A tuple attribute with a nested schema:
  coordinates: {
    type: "tuple",
    schema: [
      { type: "number", required: true }, // latitude
      { type: "number", required: true }, // longitude
    ],
  },
} as const satisfies ModelSchemaType;
```
**`oneOf`**

The `oneOf` attribute config is used with `"enum"` attributes to specify allowed values. It is provided as an array of strings which represent the allowed values for the attribute.

For example, the following schema defines an attribute `status` which can only be one of three values: `"active"`, `"inactive"`, or `"pending"`:
```ts
const UserModelSchema = {
  status: {
    type: "enum",
    oneOf: ["active", "inactive", "pending"],
  },
} as const satisfies ModelSchemaType;
```
**`nullable`**

Optional boolean flag indicating whether a value may be `null`. Unless this is explicitly `true`, an error will be thrown if the attribute value is `null`.

> [!NOTE]
> Default: `false`
**`required`**

Optional boolean flag indicating whether a value is required for create-operations. If `true`, an error will be thrown if the attribute value is missing or `undefined`. Note that this check is performed after all other schema-defined transformations and validations have been applied.

> [!NOTE]
> Default: `false` for non-key attributes (keys are always required)
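
To illustrate the difference between `required` and `nullable` (a minimal sketch with illustrative attribute names):

```ts
const ExampleModelSchema = {
  email: { type: "string", required: true }, // must be present on create-operations
  nickname: { type: "string", nullable: true }, // may be explicitly set to null
  bio: { type: "string" }, // optional, but a null value will throw an error
} as const satisfies ModelSchemaType;
```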
**`default`**

Optional default value to apply. This can be configured as either a straightforward primitive value, or a function which returns a default value. If one key is derived from another, this default is also applied to `Where`-query args and other related APIs.

With the exception of `updateItem` calls, an attribute's value is set to this `default` if the initial value provided to the Model method is `undefined` or `null`.
- If the `default` is a primitive value:
  - The primitive's type must match the attribute's `type`, otherwise the Model's constructor will throw an error.
- If the `default` is a function:
  - The function is called with the entire item-object provided to the Model method (with UNALIASED keys), and the attribute value is set to the function's returned value.
  - This package does not validate functional `default`s.
Bear in mind that key and index attributes are always processed before all other attributes, thereby making them available for use in the `default` functions of other attributes. For example, in the `LibraryModelSchema` below, each `authorID` is generated using `unaliasedPK` plus a UUID:
```ts
const LibraryModelSchema = {
  unaliasedPK: {
    isHashKey: true,
    type: "string",
    default: () => makeLibraryID(),
    alias: "libraryID" /* <-- NOTE: This alias will NOT be available
                              in the below authorID `default` function. */,
  },
  authors: {
    type: "array",
    schema: [
      {
        type: "map",
        schema: {
          authorID: {
            type: "string",
            default: (entireLibraryItem) => {
              // unaliasedPK is available here because it is a key attribute!
              return entireLibraryItem.unaliasedPK + getUUID();
            },
          },
        },
      },
    ],
  },
};
```
**`validate`**

The `validate` attribute config is used to specify a custom validation function for an attribute. The function is called with the attribute's value as its first argument, and it should return `true` if the value is valid, or `false` if it is not.
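
For example, here's a minimal sketch of a `validate` function that checks a string attribute against a pattern (the attribute name and regex are illustrative):

```ts
const OrderModelSchema = {
  orderID: {
    type: "string",
    required: true,
    // Return true if the value is valid, false otherwise:
    validate: (value: string) => /^ORDER#\d+$/.test(value),
  },
} as const satisfies ModelSchemaType;
```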
**`transformValue`**

The `transformValue` attribute config is an optional dictionary of `toDB` and/or `fromDB` transformation functions which are called with the attribute's value. A `transformValue` config can include both `toDB` and `fromDB` functions, or just one of them.

`transformValue` functions must return either a value of the attribute's configured `type`, or `null` if the attribute is not `required` (a `null` value for a `required` attribute will cause a validation error to be thrown). Returning `undefined`, whether explicitly or implicitly, is always ignored, i.e., the value will remain as it was before the `transformValue` function was called.
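
Here's a minimal sketch of a `transformValue` config that normalizes a string attribute (the attribute name and transforms are illustrative):

```ts
const UserModelSchema = {
  email: {
    type: "string",
    required: true,
    transformValue: {
      // Applied to values traveling TO the db (write ops):
      toDB: (value: string) => value.toLowerCase(),
      // Applied to values coming FROM the db:
      fromDB: (value: string) => value.trim(),
    },
  },
} as const satisfies ModelSchemaType;
```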
The following options are available when creating a Model:

**`autoAddTimestamps`**

This boolean indicates whether the Model should automatically add `createdAt` and `updatedAt` attributes to the Model schema. When enabled, timestamp fields are added before any `default` functions defined in your schema are called, so your `default` functions can access the timestamp values for use cases like UUID generation.

> [!NOTE]
> Default: `false`
**`allowUnknownAttributes`**

Whether the Model allows items to include properties which aren't defined in its schema on create/upsert operations. This may also be set to an array of strings to only allow certain attributes, which can be useful if the Model includes a `transformItem` function that adds properties to the item.

> [!NOTE]
> Default: `false`
**`transformItem`**

Like its `transformValue` counterpart, the `transformItem` config is an optional dictionary of `toDB` and/or `fromDB` transformation functions, but these are called with an entire item-object rather than an individual attribute. A `transformItem` config can include both `toDB` and `fromDB` functions, or just one of them. `transformItem` functions must return a "complete" item that effectively replaces the original.
**`validateItem`**

Like its `validate` counterpart, the `validateItem` config is used for validation, but it is called with an entire item-object rather than an individual attribute. The `validateItem` function should return `true` if the item is valid, or `false` if it is not.
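
Putting these options together, here's a sketch of a Model-options object (the shape is inferred from the descriptions above, and the item properties are hypothetical):

```ts
const modelOptions = {
  autoAddTimestamps: true,
  // Allow only "fullName" as an unknown attribute, since the
  // transformItem.toDB hook below adds it on write operations:
  allowUnknownAttributes: ["fullName"],
  transformItem: {
    toDB: (item: Record<string, any>) => ({
      ...item,
      fullName: `${item.firstName} ${item.lastName}`,
    }),
  },
  validateItem: (item: Record<string, any>) =>
    typeof item.firstName === "string" && typeof item.lastName === "string",
};
```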
DDB-ST models provide a high-level API for batching CRUD operations that handles the heavy lifting for you, while also providing the flexibility to customize the behavior of each operation:
- `batchGetItems` — Retrieves multiple items from the database in a single request.
- `batchUpsertItems` — Creates or updates multiple items in the database in a single request.
- `batchDeleteItems` — Deletes multiple items from the database in a single request.
- `batchUpsertAndDeleteItems` — Creates, updates, or deletes multiple items in the database in a single request.
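
For example, here's a sketch of a batch retrieval using the `UserModel` from the Getting Started section (the exact argument shape is an assumption, and aliased key names are used as described in the `alias` config section):

```ts
// Retrieve several users by their (aliased) keys in a single request:
const users = await UserModel.batchGetItems([
  { id: "USER#1577096641" },
  { id: "USER#1577096642" },
  { id: "USER#1577096643" },
]);
```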
As recommended by AWS, DDB-ST automatically retries batch operations which either return unprocessed requests (e.g., `UnprocessedKeys` for `BatchGetItem`), or result in a retryable error. In adherence to AWS best practices, all retries are implemented using a configurable exponential-backoff strategy:
- First request: no delay
- Second request: delay `initialDelay` milliseconds (default: 100)
- All subsequent request delays are equal to the previous delay multiplied by the `timeMultiplier` (default: 2), until either:
  - The `maxRetries` limit is reached (default: 10), or
  - The `maxDelay` limit is reached (default: 3500, or 3.5 seconds)

Ergo, the base `delay` calculation can be summarized as follows: `delay in milliseconds = initialDelay * timeMultiplier^attemptNumber`

If `useJitter` is true (default: false), the `delay` is randomized by applying the following to the base `delay`: `Math.round(Math.random() * delay)`

Note that the determination as to whether the delay exceeds the `maxDelay` is made BEFORE the jitter is applied.
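
The strategy described above can be sketched as follows (illustrative only, not the library's actual implementation):

```ts
// Illustrative sketch using the documented backoff defaults:
const initialDelay = 100; // ms
const timeMultiplier = 2;
const maxDelay = 3500; // ms
const useJitter = false;

function getRetryDelay(attemptNumber: number): number {
  // The base delay grows exponentially: 100ms, 200ms, 400ms, 800ms, ...
  const baseDelay = initialDelay * timeMultiplier ** attemptNumber;
  // The maxDelay comparison uses the BASE delay, before any jitter:
  const cappedDelay = Math.min(baseDelay, maxDelay);
  return useJitter ? Math.round(Math.random() * cappedDelay) : cappedDelay;
}
```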
**Q: Why use a single-table design?**

A: Single-table design patterns can yield both greater IO and cost performance, while also reducing the amount of infrastructure that needs to be provisioned and maintained. For a technical breakdown as to why this is the case, check out this fantastic presentation from one of the designers of DynamoDB speaking at AWS re:Invent.
**Q: Which DynamoDB client does DDB-ST use?**

A: DDB-ST provides a single streamlined abstraction over both the document and vanilla DynamoDB clients:

- CRUD actions use the document client to provide built-in marshalling/unmarshalling of DDB-attribute objects.
- Utility actions like `DescribeTable`, which aren't included in the document client, use the vanilla client.
- To ensure client resources like socket connections are cleaned up, a listener is attached to the process `"exit"` event which calls the vanilla client's `destroy()` method. Note that although the document client exposes the same method, calling it on the doc-client results in a no-op.
**Q: Which version of the AWS SDK does DDB-ST use?**

A: Version 3. For the specific minor/patch release, please refer to the package.json.
Pull requests are welcome! Before you begin, please check existing GitHub Issues and Pull Requests to see if your idea is already in the pipeline. If not, here's a guide on how to contribute to this project. Thank you!
ddb-single-table is open-source software licensed under the MIT License.