# wai-handler-hal
This library lets you run `wai` `Application`s on AWS Lambda, which means you can now use mature web frameworks like `servant`.
The main entry point is `Network.Wai.Handler.Hal.run`, which wraps an `Application` and returns a function that can be passed to `hal`'s `AWS.Lambda.Runtime.mRuntime`.
**NOTE:** The function returned by `Network.Wai.Handler.Hal.run` is _only_ for Lambda Proxy Integrations of AWS API Gateway REST APIs (`AWS::ApiGateway::RestApi` in CloudFormation). If you try to use such a Lambda with an API Gateway HTTP API (`AWS::ApiGatewayV2::Api` in CloudFormation), it will return 500s.
For code examples, see this repository, which contains example projects.
## Developing
Instead of serving your application with something like `warp`, you will set up an executable that serves your application with `hal`. The necessary glue code looks something like this:
```haskell
import AWS.Lambda.Runtime (mRuntime)
import Network.Wai (Application)
import qualified Network.Wai.Handler.Hal as WaiHandler

app :: Application
app = undefined -- From Servant or wherever else

main :: IO ()
main = mRuntime $ WaiHandler.run app
```
## Local testing
Compiled Lambda functions can be awkward to test, especially if you're relying on an API Gateway integration to translate HTTP requests. Consider defining a second executable target that serves your `Application` using `warp`, which you can use for local testing.
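As a rough sketch (the port number and module layout are arbitrary choices, and `app` is assumed to be the same `Application` the Lambda executable wraps), such a target might look like:

```haskell
import Network.Wai (Application)
import qualified Network.Wai.Handler.Warp as Warp

app :: Application
app = undefined -- The same Application the Lambda executable wraps

-- Serve the Application over plain HTTP for local testing.
main :: IO ()
main = Warp.run 8080 app -- http://localhost:8080
```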
## Caveats
- The Lambda is never told the port that the API Gateway is listening on. Most APIs will listen on 443 (HTTPS), so that's what the library reports by default. See `runWithContext` if you need to change this.
- The Lambda is never told the HTTP version that the client uses when talking with the API Gateway. We assume HTTP 1.1.
## Packaging
Lambda functions are packaged in one of two ways: as `.zip` files or as Docker container images. The `wai-handler-hal-cdk` example uses `.zip` files for simplicity. CDK doesn't give us an easy way to build container images using `nix build`, so a container-based deployment using CDK would need to push to ECR first and then reference the image by name.
### `.zip` files
You will need to create a `.zip` file containing (at least) an executable named `bootstrap`. This executable needs to run in Amazon's runtime environment, and there are multiple ways to ensure this.
#### Static linking
IOHK's `haskell.nix` can build static Haskell binaries by cross-compiling against musl libc. This is convenient, but consider the copyleft implications (of `libgmp`; `wai-handler-hal` is BSD-3-Clause) if you are distributing the binaries to other people.
#### Dynamic linking
The other option is to compile the executable in an environment with packages that match the Lambda runtime environment. Some ideas:

- Build the executable in an "imitation environment" like the `lambci/lambda` Docker container; or
- (Untested) build the executable on an Amazon Linux 2 EC2 instance.
### Docker container images
It is possible to package Lambdas into Docker containers for deployment. Amazon provide base containers for custom runtimes in the `amazon/aws-lambda-provided` repository. One nice feature of these images is that they provide a runtime interface emulator, which they fall back to when not running "for real". This makes it possible to invoke the Lambda directly and see how it behaves. (This is less important here, because we can serve our application off `warp`, but it's handy for writing Lambdas that aren't invoked by an API Gateway Proxy Integration.)
At the time of writing (2021-04-06), `amazon/aws-lambda-provided:al2` (the Amazon Linux 2 tag) is the newest base image that Amazon recommends for custom runtimes. You can use a `Dockerfile` to build your images; there are also commented `.nix` files in the example repository, showing a couple of different approaches to building container images using Nix.
## Integrating
Actually calling the Lambda is done with a Lambda proxy integration from an API Gateway REST API. The simplest possible integration sends every request to the Lambda (where the `wai` `Application` can return 404 or whatever if the endpoint doesn't match). This is done by mapping the paths `/` and `/{proxy+}` for the HTTP method `ANY`. We do this in our CDK example, and CDK provides a Construct which encapsulates this pattern, making it extremely simple to deploy.
### Splitting Servant applications
While it's simplest to back the entire API with a single Lambda, it does mean that you risk one Lambda carrying an ever-expanding set of IAM permissions. Servant's combinators make it easy to break the API into parts that are served by individual Lambdas, which can each have the minimal set of permissions attached to their role. If your Servant service had endpoints `foo`, `bar`, and `baz`, you'd serve them all from Warp with an expression like `Warp.runEnv 8080 . Servant.serve $ foo :<|> bar :<|> baz`. But `Servant.serve foo`, `Servant.serve bar`, and `Servant.serve baz` are also all WAI `Application`s, and can each be bundled into distinct Lambda functions using `wai-handler-hal`.
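As a hedged sketch of what one of those per-endpoint Lambda executables might look like (the `FooApi` type, `fooServer` handler, and `fooApp` name are made up for illustration; the real sub-API and handlers would come from your existing Servant service):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import AWS.Lambda.Runtime (mRuntime)
import Data.Proxy (Proxy (..))
import Network.Wai (Application)
import qualified Network.Wai.Handler.Hal as WaiHandler
import Servant

-- A made-up sub-API standing in for the "foo" part of the service.
type FooApi = "foo" :> Get '[JSON] [Int]

fooServer :: Server FooApi
fooServer = pure [1, 2, 3]

-- Servant.serve turns the sub-API into an ordinary WAI Application...
fooApp :: Application
fooApp = serve (Proxy :: Proxy FooApi) fooServer

-- ...which gets its own Lambda executable, and therefore its own IAM role.
main :: IO ()
main = mRuntime $ WaiHandler.run fooApp
```

The `bar` and `baz` executables would follow the same pattern, each serving only its own sub-API.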
## Other caveats
- The stage name in a path segment is not passed to the Lambda, so it is not passed to the `wai` `Application`. An invoke URL like `https://abcde12345.execute-api.us-east-1.amazonaws.com/prod/hoot` will send `pathInfo = ["hoot"]` to the `Application` (see the sketch below).
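A minimal, purely illustrative `Application` showing what that looks like from the handler's point of view (the response bodies are made up; the point is that the `prod` stage segment never appears in `pathInfo`):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Types (status200, status404)
import Network.Wai (Application, pathInfo, responseLBS)

-- A request to .../prod/hoot arrives with pathInfo == ["hoot"],
-- so it matches the first case; the stage name is already stripped.
app :: Application
app request respond = case pathInfo request of
  ["hoot"] -> respond $ responseLBS status200 [] "hoot!"
  _ -> respond $ responseLBS status404 [] "not found"
```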
The formatters used in this repo are provided by `shell.nix`.
## Regenerate CI
This repo uses `haskell-ci`, which is provided by `flake.nix`:

```shell
haskell-ci regenerate
```