# Litexa APL
The Alexa Presentation Language supports curating visual experiences on compatible Alexa-enabled screen devices.
## APL Directives

There are two different APL directives:

- **Render Document Directive:** Sends a required `document` with graphical layouts and components, and optional `datasources`.
- **Execute Commands Directive:** Sends one or more `commands`, which are tied to a `document` (in the same or a past response), and execute in sequence upon arrival.
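For orientation, here's a hedged sketch of what these two directive payloads look like in a skill response (the `token` value and the `document`/`commands` contents below are minimal placeholders):

```json
// sketch: a RenderDocument directive
{
  "type": "Alexa.Presentation.APL.RenderDocument",
  "token": "DEFAULT_TOKEN",
  "document": { "type": "APL", "version": "1.0", "mainTemplate": {} },
  "datasources": {}
}

// sketch: an ExecuteCommands directive
{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "DEFAULT_TOKEN",
  "commands": [ { "type": "Idle", "delay": 1000 } ]
}
```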
This module makes it easy to build, send, and validate APL directives. Information on how to install and use the module (along with some general APL usage guidance) can be found below.
WARNING: The `Display.RenderTemplate` directive (as supported by the `@litexa/render-template` extension) is not compatible with these APL directives. The two can still both be used in the same skill, as long as they aren't sent in the same response. If they are, the APL directive(s) will take precedence and the `Display.RenderTemplate` directive will be removed!
## Installation
The module can be installed globally, which makes it available to any of your Litexa projects:

```bash
npm install -g @litexa/apl
```
If you'd rather install the extension locally inside your Litexa project, for the sake of tracking the dependency, run the following inside your Litexa project directory:

```bash
npm install --save @litexa/apl
```
This should result in the following directory structure:

```
project_dir
├── litexa
└── node_modules
    └── @litexa
        └── apl
```
## New Statement
When installed, the module adds a new `apl` statement to Litexa's syntax, which can be used to build and send APL directives from within Litexa.

The `apl` statement supports the following attributes:
- `document` ... requires a Document `object`
- `data` ... requires a Data `object`
- `commands` ... requires either a single `object` or an `array` of Commands objects
- `token` ... requires a `String` identifier to be attached to the APL directives
There are two options for supplying the objects required by `document`, `data`, and `commands`:
- Use a (quoted or unquoted) path to a JSON file. This path should be relative to your skill's `litexa` directory. Doing this will assign the JSON file's contents to the indicated attribute (and throw a compile-time error, if the file can't be found). This option is meaningful for anything static (e.g. a fixed `document`).

  For example, assuming the following project structure:

  ```
  project_dir
  └── litexa
      ├── my_doc.json
      └── apl
          └── my_data.json
  ```

  The above files could be referenced like so:

  ```
  apl
    document: my_doc.json
    data: apl/my_data.json
  ```
- Use a function in external code to generate the `object`/`array` and supply the output. This option is meaningful for anything dynamic (e.g. `data` or `commands` which depend on certain parameters).

  ```javascript
  function generateMyData(args) {
    // ...
  }
  ```

  ```
  local myData = generateMyData(args)
  apl
    data: myData
  ```
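Putting it all together, a single `apl` statement could combine all four attributes. Here's a minimal sketch, assuming hypothetical file names and token:

```
apl
  document: my_doc.json
  data: my_data.json
  commands: my_commands.json
  token: "MY_SCREEN"
```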
More information on how to use each of these attributes is provided below.
## Document
As a reminder, every `APL.RenderDocument` directive is required to have a document (this means sending only data is not possible).
Here's an example of sending a `document` with minimal properties via `apl`:

```
apl my_doc.json
# The above shorthand for specifying a document is equivalent to:
apl
  document: my_doc.json
```

```json
// my_doc.json
{
  "type": "APL",
  "version": "1.0",
  "mainTemplate": {
    // required components for APL to inflate on the device upon activation
  }
}
```
TIP: `apl` will automatically add default values for `type` and `version`, if missing.
Beyond the required `mainTemplate`, a `document` can optionally include `import`, `resources`, `styles`, and `layouts`. For more information on these, please refer to the official APL Document documentation.
TIP: If you prefer, you can specify a `document` that wraps either or both of `document` and `data`. This is also the format provided by any APL Authoring Tool exports, so directly referencing any exported examples as your `document` will work.
```json
// my_doc.json:
{
  "document": {
    // your APL document
  },
  "data": { // or alias: "datasources"
    // your APL data
  }
}
```
TIP: To customize behavior per device, you can use the data-bound `viewport` variable in `when` conditionals, to check device properties. Here are a couple of examples:
"resources": [
{
"description": "Stock color for the light theme",
"colors": {
"colorTextPrimary": "#151920"
}
},
{
"description": "Stock color for the dark theme",
"when": "${viewport.theme == 'dark'}",
"colors": {
"colorTextPrimary": "#f0f1ef"
}
}
]
"layouts": {
"items": [
{
"when": "${viewport.shape == 'round'}",
"type": "Container",
(...)
// use this container, if running on the Echo Spot
}
{
"type": "Container",
(...)
// otherwise, use this container
}
]
}
For more information on `viewport` and which characteristics of the display device it includes, please refer to the Viewport Property documentation.
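As another illustrative condition (the width threshold here is an arbitrary assumption), a component could be gated on the viewport's pixel width:

```json
{
  "type": "Text",
  "text": "This text only inflates on sufficiently wide viewports.",
  "when": "${viewport.pixelWidth >= 960}"
}
```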
## Data
As a reminder, every `APL.RenderDocument` directive can optionally include `datasources`. This data is a collection of skill-author-defined objects, which can then be referenced in an APL document's components.
Here's an example of sending some `data` via `apl`, and using it in a `document`:

```
apl
  document: my_doc.json
  data: my_data.json
```

```json
// my_data.json
// datasources:
{
  "myDataObject": {
    "type": "object",
    "properties": {
      "title": "This is myDataObject's title."
    }
  }
}
```
The above `data` is then accessed with the parameter `payload`:
```json
// my_doc.json
// document:
{
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Text",
      "text": "${payload.myDataObject.title}"
    }
  }
}
```
WARNING: The data reference `payload` is the default, but could be replaced with any `String`. However, it is important to only have a single `String` in `parameters` (adding anything else will break the document).
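For instance, here's a sketch of the same `mainTemplate` with a renamed parameter (the name `myData` is arbitrary):

```json
{
  "mainTemplate": {
    "parameters": [
      "myData"
    ],
    "item": {
      "type": "Text",
      "text": "${myData.myDataObject.title}"
    }
  }
}
```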
For more information on what kind of `data` you can use, please refer to the APL Data Sources and Transformers documentation.
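As one example, data sources can attach transformers to their properties. Here's a hedged sketch using the `textToHint` transformer (the object and property names are hypothetical); the transformed result would be available to the document as `hint`:

```json
{
  "myDataObject": {
    "type": "object",
    "objectId": "myDocData",
    "properties": {
      "hintString": "open the menu"
    },
    "transformers": [
      {
        "inputPath": "hintString",
        "outputName": "hint",
        "transformer": "textToHint"
      }
    ]
  }
}
```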
## Commands
As a reminder, the `APL.ExecuteCommands` directive is sent with a single command `object`, or an `array` of multiple commands. These commands are then executed in sequence.

Here's an example of using `commands` via `apl`, to show pages in a `document`'s `Pager`:
```
apl
  document: my_pager.json
  commands: pager_commands.json
```

```json
// my_pager.json:
{
  "mainTemplate": {
    "item": [
      {
        "type": "Pager",
        "id": "pagerComponentId",
        "items": [
          {
            "type": "Text",
            "text": "Page 1" // page 1 will inflate first
          },
          {
            "type": "Text",
            "text": "Page 2"
          },
          {
            "type": "Text",
            "text": "Page 3"
          }
        ]
      }
    ]
  }
}
```
```json
// pager_commands.json
[
  {
    "type": "Idle",
    "delay": 2000 // let page 1 show for 2 secs
  },
  {
    "type": "SetPage",
    "componentId": "pagerComponentId", // above Pager's ID
    "position": "absolute",
    "value": 1 // turn to page 2 (absolute positions are 0-indexed)
  },
  {
    "type": "Idle",
    "delay": 2000 // let page 2 show for 2 secs
  },
  {
    "type": "SetPage",
    "componentId": "pagerComponentId",
    "position": "absolute",
    "value": 2 // turn to page 3
  }
]
```
TIP: You can send `commands` without a `document` or `data`. If a `document` is active on the device, the `commands` will execute accordingly. Otherwise, they will be ignored. This means you can send a `document` at some point in your skill, and then choose to send detached `commands` in future responses.
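As a brief sketch of this pattern (the utterance and file name are hypothetical, and the document is assumed to have been rendered in an earlier response):

```
when "next page"
  apl
    commands: turn_page.json
```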
## Tokens
The `apl` `token` defaults to "DEFAULT_TOKEN", if not specified. It's important to note that an `ExecuteCommands` directive's token must match the displaying `RenderDocument`'s token for the commands to run.
WARNING: As of March 2018, commands with tokens not matching the active document incorrectly do work on the ASK Developer Console. They are properly suppressed on APL-compatible devices.
In addition to ensuring that `commands` only run atop the intended `document`, tokens can also be used to attribute User Events, as demonstrated further below.
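Here's a hedged sketch of tagging a document with a custom token and targeting it with later commands (the file names and token string are hypothetical):

```
apl
  document: menu_screen.json
  token: "MENU_SCREEN"

# ... in a later response; these commands would only run
# if the "MENU_SCREEN" document is still displayed:
apl
  commands: highlight_selection.json
  token: "MENU_SCREEN"
```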
## Merging Fragments
The `apl` statement supports aggregating multiple instances of `document`, `data`, and `commands`. What does this mean? If your skill encounters multiple `apl` statements before sending a response (e.g. `apl` statements in different states), it will aggregate any such "fragments" before sending them in APL directives. Here's an example:
```
stateOne
  apl doc_one.json
    data: data_one.json
    commands: commands_one.json
  -> stateTwo

stateTwo
  apl doc_two.json
    data: data_two.json
    commands: commands_two.json
  -> stateThree

stateThree
  apl
    document: doc_three.json
    data: data_three.json
    commands: commands_three.json

# The above state sequence would merge all three documents,
# data, and commands prior to sending the response.
```
This behavior is useful for adding state-specific content or instructions to your skill's APL behavior, or interleaving `commands` with Litexa `say` or `soundEffect` statements.
WARNING: Since the `document` can only meaningfully have one `mainTemplate` (i.e. active template), any consecutively encountered `document` fragments that have a `mainTemplate` will overwrite the previous `mainTemplate` (with a logged warning)! Make sure any possible state flow with at least one `apl` `document` always finds a valid `mainTemplate`, and wouldn't accidentally overwrite a previous `document`'s required `mainTemplate`.
NOTE: If a Litexa state flow encounters consecutive `apl` `token`s prior to sending a response, it will simply use the latest.
## Referencing Assets
Beyond using existing URLs, it is possible to reference `assets` files in any `apl` `document` or `data`. To do so, simply add the placeholder prefix `assets://` to your file's name. For example:
```json
{
  "type": "Image",
  "source": "assets://my_image.jpg",
  "width": 300,
  "height": 300
}
```
Assuming there's a `my_image.jpg` in your `assets` directory, the above reference would then be replaced with the S3 link of the deployed file.
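The same prefix works in `data`. For instance, a hypothetical datasource entry (the names here are illustrative) could reference a deployed image:

```json
// hypothetical my_data.json entry
{
  "imageData": {
    "type": "object",
    "properties": {
      "backgroundImage": "assets://background.jpg"
    }
  }
}
```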
## Interleaving Sound
Litexa's `say` and `soundEffect` are usually added to the response's `outputSpeech`. However, any `outputSpeech` is spoken before APL commands are executed. Using the `apl` statement, if a `document` is pending, `say` and `soundEffect` will be converted to APL commands and interleaved in the expected sequence. For example:
```
apl
  document: apl_doc.json
  data: apl_data.json
  commands: apl_commands.json
say "turning page"
apl
  commands: turn_page.json
soundEffect page_chime.mp3
say "page turned"
```
would produce the following output sequence:

1. APL would execute the commands in `apl_commands.json`
2. Alexa would say "turning page"
3. APL would execute the commands in `turn_page.json`
4. Alexa would play the sound effect `page_chime.mp3`
5. Alexa would say "page turned"
NOTE: The above sequencing will only take place if a `document` is found before sending the response. Reason: Converting any sound output to APL requires insertions in the `document`. Creating a new `document` to accomplish this might unintentionally replace an active `document` on the device.

Summary: If no `document` is pending, no interleaving will take place, and any `say` or `soundEffect` will play normally through `outputSpeech`. In the above example, if no `apl` `document` were defined, the output sequence would be 2-4-5-1-3.
WARNING: As of March 2018, sound effects incorrectly do not work on the ASK Developer Console. They do however work on APL-compatible devices.
## User Events
APL can trigger user events back to the skill when the user presses an on-screen `TouchWrapper`. Here's an example:
```json
{
  "type": "TouchWrapper",
  "id": "My Touchable",
  "item": {
    "type": "Text",
    "text": "I am a touchable that will send an event back to the skill."
  },
  "onPress": {
    "type": "SendEvent",
    "arguments": [
      "I am coming from My Touchable."
    ]
  }
}
```
A skill user touching the touchable would then trigger this `Alexa.Presentation.APL.UserEvent`:
```json
{
  "type": "Alexa.Presentation.APL.UserEvent",
  "requestId": "...",
  "timestamp": "...",
  "locale": "en-US",
  "arguments": [
    "I am coming from My Touchable."
  ],
  "components": {},
  "source": {
    "type": "TouchWrapper",
    "handler": "Press",
    "id": "My Touchable",
    "value": false
  },
  "token": "This is the token of the APL document that sourced this event."
}
```
You can optionally handle any such user events in your code with something like:
```
# In Litexa:
when Alexa.Presentation.APL.UserEvent
  handleUserEvent($request)
```

```javascript
// In your external code (e.g. JavaScript):
function handleUserEvent(request) {
  switch (request.token) {
    // could ignore a token from an outdated document
  }
  switch (request.source.id) {
    // could trigger behavior specific to this touchable
    // (e.g. send command to scroll a visible list)
  }
  switch (request.arguments) {
    // could send and evaluate something like data-bound arguments
  }
}
```
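Tying this together, a handler could also react to the event by sending detached commands back to the active document; here's a hedged sketch (the command file is hypothetical):

```
# hypothetical: react to the touch by sending detached commands
when Alexa.Presentation.APL.UserEvent
  if $request.source.id == "My Touchable"
    apl
      commands: scroll_list.json
```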
## Intent Handling Requirements
When using an APL directive, the built-in `AMAZON.HelpIntent` must be handled by the skill (i.e. included in at least one `when` listener). This can be done via state-specific handlers, or a global handler:
```
global
  when AMAZON.HelpIntent
    -> helpState
```
## Checking APL Support
If APL is not supported on the device running your skill, any `apl` statements will be ignored, and everything else will work normally (e.g. `say` and `soundEffect` will run via `outputSpeech` instead of APL commands).
To check APL support at runtime, the following command can be used from within Litexa or external code:
```
if APL.isEnabled()
  say "APL is supported on this device."
else
  say "APL is not supported on this device."
```
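For instance, a `launch` handler could gate its visuals on this check; a minimal sketch, assuming a hypothetical document file:

```
launch
  if APL.isEnabled()
    apl
      document: welcome_screen.json
  say "Welcome!"
```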
TIP: This availability check should be used to curate any skill for both APL and non-APL devices.
## Relevant Resources

For more information, please refer to the official APL documentation: