...

tmpfs for sequences and compositions 

This seems to be a long-standing demand in the community as well:

Problem

Serverless application decomposition encourages splitting tasks into more granular functions. This is the source of two categories of problems: 

Problem 1 - Transferring large assets between actions

It's hard to pipe data between actions when the size of the response exceeds the response size limits. As a result, each action in a sequence must read the asset independently and upload any new content to a temporary storage for the next action in the sequence to read. This creates two main problems: 

...

system limits.

Problem 2 - Multiple actions processing the same asset

When multiple actions process the same asset, each action needs to independently re-download it, resulting in unnecessary network traffic.

Workarounds

Send Asset by Reference

Developers can use a 3rd-party blob storage such as AWS S3 to upload the asset and pass it to the next action using the URL of the asset as "the reference" to it.
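
A minimal sketch of the by-reference pattern, assuming a Python action using the boto3 S3 client; the bucket name, object key, local path, and the produce_asset helper are illustrative, not part of the proposal:

```
import boto3

def main(args):
    # Upload the locally produced asset to a 3rd-party blob storage (S3 here)
    # and pass only a reference downstream, not the asset itself.
    s3 = boto3.client("s3")
    bucket = args.get("bucket", "example-temp-assets")  # illustrative bucket
    key = args.get("key", "sequence/asset.bin")         # illustrative object key

    local_path = "/tmp/asset.bin"
    produce_asset(local_path, args)                      # the action's actual work

    s3.upload_file(local_path, bucket, key)

    # The next action in the sequence receives only the reference and must
    # download the asset again before it can do any work on it.
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600)
    return {"assetUrl": url, "bucket": bucket, "key": key}

def produce_asset(path, args):
    # Placeholder for the processing step that generates the asset.
    with open(path, "wb") as f:
        f.write(b"example content")
```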

The problem with this workaround is that the size of the asset influences the performance. Having each action upload to and download from a blob storage for each activation increases the execution time. The bigger the asset, the bigger the impact. If the asset is 2GB, then uploading and downloading it on each activation may add up to a minute to the execution time.

Developer experience is also impacted by the fact that developers must manage the integration with a 3rd-party blob storage. This implies that:

  • developers are responsible for performing regular credential rotation for the blob storage, 
  • developers must manage multi-region buckets, corresponding to the regions the actions may be invoked in, to optimize latencies

Combine multiple actions in one action

Developers can combine the operations that act on the same asset into a single action. This workaround makes it harder to reuse actions in sequences or compositions.

It also restricts developers from using a polyglot implementation with actions written in multiple languages; for instance, the AI actions could be written in Python and combined with JavaScript actions that download the asset at the beginning of the sequence and upload it at the end, in the last action of the composition.

Compress Asset and include it in the payload

If the asset format can be further compressed, it could be sent as Base64 in the JSON payload to the next action. 
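
A minimal sketch of this workaround for Python actions; the local path and the gzip encoding are illustrative choices:

```
import base64
import gzip

def main(args):
    # Compress the asset and embed it inline in the JSON result, so the next
    # action in the sequence receives it directly in its input arguments.
    with open("/tmp/asset.bin", "rb") as f:   # illustrative local path
        raw = f.read()

    encoded = base64.b64encode(gzip.compress(raw)).decode("ascii")

    # Still subject to the configurable payload/response size limits:
    # if the encoded string exceeds them, the activation fails.
    return {"asset": encoded, "encoding": "gzip+base64"}

def receive(args):
    # The receiving action (deployed separately) must reverse the encoding
    # before it can process the asset.
    raw = gzip.decompress(base64.b64decode(args["asset"]))
    return {"sizeBytes": len(raw)}
```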

This workaround is still constrained by configurable system limits, and it makes the action code more complex than it needs to be.

Measuring the impact

The problems described above result in:

  • Poor developer experience
  • Poor execution performance

In order to assess the impact, we need a methodology to quantify how big this problem is: 

Proposal to measure the impact

Measuring execution performance 

Questions:

  1. How much time is spent moving the content around vs. processing it?
  2. What's the impact of using a blob storage when compared to an NFS/EFS-like system?
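
A sketch of how question 1 could be quantified inside an instrumented Python action, assuming the transfer and processing steps are separable; the helper functions are placeholders, not part of the proposal:

```
import time

def main(args):
    # Time the data movement separately from the actual processing, so the
    # ratio transferSeconds / total time can be reported per activation.
    t0 = time.time()
    asset = download_asset(args)        # placeholder for the transfer step
    t1 = time.time()
    result = process_asset(asset)       # placeholder for the processing step
    t2 = time.time()

    return {
        "result": result,
        "transferSeconds": t1 - t0,
        "processingSeconds": t2 - t1,
    }

def download_asset(args):
    # Placeholder: fetch the asset from a blob storage or a shared volume.
    return b"example content"

def process_asset(asset):
    # Placeholder: the action's real work on the asset.
    return {"sizeBytes": len(asset)}
```
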
Measuring developer impact
  1. Maintenance overhead 
  2. Implementation time 
  3. Additional code complexity 

Proposal

The proposal is to transparently provide developers with a temporary way to store large assets in the OW cluster. This is probably the hardest problem to solve when compared to the other ones because it involves persistence, state, and possibly handling large amounts of data. Below are listed a few possible options:

Namespace volumes

(credits: Cosmin Stanciu for suggesting this simpler solution)

Namespaces can be provided with their own volume, one volume per namespace, limited in size (e.g. 10GB). The programming model should treat this volume as short-lived, hence its limited size. Its purpose is to let sequences and compositions access assets faster than if they were downloaded from an eventually consistent blob storage. The volume is independent of the activation lifecycle. It's up to the developers to manage it, including cleaning it up.

OpenWhisk's implementation could configure a network file system such as NFS (e.g. AWS EFS), GlusterFS, or others. This volume is mounted on each Invoker host. Each namespace has its own subfolder on that volume, which can be mounted into a known path in the action containers that want to make use of it. To simplify stem-cell management, each action requiring access to such a volume may bypass stem cells; instead, it may go through a cold start, as if it were a blackbox container.

The diagram below shows 2 Invoker Hosts that mount a network file system at /mnt/actions on the host. Inside the action containers, the /tmp folder is mounted from the host folder corresponding to the namespace: /mnt/actions/<NAMESPACE>.

[draw.io diagram: OpenWhisk-namespace volume]
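
From the action's perspective, a sketch of how two steps of a sequence could share an asset through the namespace volume, assuming it is mounted at /tmp as described above; each function would be the main of a separate Python action, and the assetUrl/assetPath parameter names are illustrative:

```
import os
import urllib.request

def download_main(args):
    # First action in the sequence: fetch the asset once and leave it on the
    # namespace volume (mounted at /tmp, backed by /mnt/actions/<NAMESPACE>).
    path = "/tmp/asset.bin"
    urllib.request.urlretrieve(args["assetUrl"], path)
    # Only the path travels in the JSON payload, not the asset itself.
    return {"assetPath": path}

def process_main(args):
    # Next action in the sequence: read the asset directly from the volume,
    # without re-downloading it from a blob storage.
    path = args["assetPath"]
    size = os.path.getsize(path)
    return {"assetPath": path, "sizeBytes": size}
```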

Action volumes

Allow developers to "attach" a persistent disk to an action. The programming model in this case assumes there's always a folder available on the disk, at a well-known path defined by the developer. OpenWhisk's implementation could leverage solutions specific to a deployment:

...