k8s-wrapper-0.1.0.0: Application wrapper for the k8s environment
Safe Haskell: Safe-Inferred
Language: Haskell2010

Network.K8s.Application

Description

k8s-wrapper

The k8s Wrapper is a module designed to provide an interface for running applications in the Kubernetes system. This wrapper spawns the server on an internal protocol, providing endpoints for startup, liveness, and readiness checks, as well as optional metrics support.

There are some restrictions to be aware of when using this module. First, the **server must be running in the main thread** in order to exit properly. If this guarantee is not met, the application and k8s will still function, but rollout update procedures may take much longer.

Second, the user's application must be able to tear down upon receiving an AsyncCancelled or ThreadKilled exception. It is acceptable to implement graceful teardowns, but they should be time-bound. In general, applications that use Warp handle this automatically.
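
For example, wrapping the user action in bracket gives a teardown that runs when the wrapper delivers an asynchronous exception, and timeout keeps the graceful part time-bound. A minimal sketch; the resource actions are stand-ins for real application code:

{-# LANGUAGE NumericUnderscores #-}
import Control.Concurrent (threadDelay)
import Control.Exception (bracket)
import Control.Monad (forever)
import System.Timeout (timeout)

-- The loop runs until ThreadKilled/AsyncCancelled arrives; 'bracket'
-- then runs the release action with exceptions masked.
runApp :: IO ()
runApp =
    bracket acquire release $ \_res ->
      forever (threadDelay 1_000_000)
  where
    acquire = putStrLn "acquiring resources"
    -- Keep the graceful part time-bound: allow at most 5s for cleanup.
    release _ = () <$ timeout 5_000_000 (putStrLn "flushing and closing")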

To use the k8s Wrapper, include the following configuration snippet in your pod:

pod.yaml

...
spec:
  metadata:
    annotations:
      prometheus.io/port: "9121"
      prometheus.io/scrape: "true"
      prometheus.io/path: "/_metrics"
  # Period after which the pod will be terminated
  # even if the stop hook has not returned.
  terminationGracePeriodSeconds: 30
  containers:
    - lifecycle:
        preStop:
          httpGet:
            path: /stop
            port: ${config.port}
      # The service counts as started once the startup probe succeeds.
      # If the probe does not succeed within
      # `initialDelaySeconds + periodSeconds * failureThreshold` seconds,
      # the service is restarted.
      startupProbe:
        httpGet:
          path: /ready
          port: ${config.port}
        failureThreshold: 12
        initialDelaySeconds: 1
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 2
      # The liveness probe decides whether the service is alive; if it
      # is not within `initialDelaySeconds + periodSeconds * failureThreshold`
      # seconds, it is restarted according to its restart policy.
      livenessProbe:
        httpGet:
          path: /health
          port: ${config.health}
        failureThreshold: 2
        initialDelaySeconds: 1
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 2
      readinessProbe:
        httpGet:
          path: /ready
          port: ${config.port}
        failureThreshold: 2
        initialDelaySeconds: 1
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 2

Documentation

withK8sEndpoint Source #

Arguments

:: Config

Static configuration of the endpoint.

-> K8sChecks

K8s hooks

-> IO a

Initialization procedure

-> (a -> IO b)

User-supplied logic; see the requirements above for the server to work properly.

-> IO () 

Wrap the business-logic server in a control server. The control server can be reached by the k8s services and monitors the application lifecycle.

The application runs the following logic:

   k8s_wrapper_server                  user code
                                  +----------------+
       ready=false                | initialization |             
       started=false              |                |
       alive=false                +----------------+
                                         |
       started=true  <-------------------+
                                         |
                                  +---------------+
                                  | start server  |
                                  +---------------+
                                         |
      ready? ---> check user thread, run callback
      alive? ---> check user thread, run callback

The server wrapper also provides additional logic:

  1. When checking liveness, the wrapper checks whether the thread running the user's server is still alive and returns False if it is not, regardless of what the liveness callback returns.
  2. When the stop action is called, the wrapper starts returning False from the readiness check. Once readiness has been polled (and answered False), it sends an exception to the user code. This ensures that no new requests will be routed to the server.

In case of an asynchronous exception, we expect the program to terminate, so we perform a set of actions similar to a call to the /stop hook:

  1. Put the application in the tearing-down state.
  2. Start replying with ready=False.
  3. Once we have replied ready=False at least once (or after a timeout), trigger the server stop. At this point the server is expected to stop accepting new connections and to exit once all in-flight connections have been processed. This is the responsibility of the function provided by the user, but servers like Warp already handle it properly.

If the initialization function throws an exception, it is rethrown and the function exits with the same exception; the k8s endpoint is torn down.

If the user code exits with an exception, it is rethrown. Otherwise, the function exits normally.
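
A minimal end-to-end sketch using Warp. The K8sChecks field names (runReadinessCheck, runLivenessCheck) are assumptions made for this sketch, not confirmed accessors; consult the Fields listing below before copying:

module Main (main) where

import Network.HTTP.Types (status200)
import Network.Wai (responseLBS)
import qualified Network.Wai.Handler.Warp as Warp

import Network.K8s.Application

-- NOTE: the 'K8sChecks' field names below are illustrative; check the
-- Fields section of this module for the real record accessors.
main :: IO ()
main =
  withK8sEndpoint
    defConfig                          -- wrapper endpoint, default settings
    K8sChecks
      { runReadinessCheck = pure True  -- see the Checks section below
      , runLivenessCheck  = pure True
      }
    (pure ())                          -- initialization: load configs, connect to services
    (\() -> Warp.run 8080 app)         -- user logic, run from the main thread
  where
    app _req respond = respond (responseLBS status200 [] "OK")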

data Config Source #

Server configuration.

Constructors

Config 

Fields

Instances

Instances details
Show Config Source # 
Instance details

Defined in Network.K8s.Application

defConfig :: Config Source #

Default configuration values.
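
Individual fields can be overridden with record-update syntax. The field name port below is hypothetical, chosen only to illustrate the pattern; the Fields listing above has the actual names:

-- 'port' is a hypothetical field name used for illustration only.
cfg :: Config
cfg = defConfig { port = 9121 }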

Checks

There are two types of health checks that can be used:

  1. Deep check - this verifies not only basic startup but also that the services the application communicates with are up and running. It is more precise, but it increases the risk of marking a service unavailable and causing cascading failures. It is therefore recommended to use a deep check for the startupCheck but shallow checks for the livenessCheck; for the readinessCheck, it is up to the user to decide which type to use.
  2. Shallow check - this runs only basic tests to ensure the application is responsive (see the sketch after this list).
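
A sketch of the two flavours; pingDatabase is a hypothetical stand-in for a real round-trip to a dependency:

import Control.Exception (SomeException, try)

-- Deep check: any failure in the dependency round-trip maps to False.
deepCheck :: IO Bool
deepCheck = do
  r <- try pingDatabase :: IO (Either SomeException ())
  pure (either (const False) (const True) r)
  where
    pingDatabase :: IO ()
    pingDatabase = pure ()  -- stand-in; e.g. run `SELECT 1` against your DB

-- Shallow check: merely confirms the process is up and able to reply.
shallowCheck :: IO Bool
shallowCheck = pure True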

The suggested approach for implementing the health checks is as follows:

  1. The startupCheck should return True after the server has started, the configuration has been validated, and communication with other services has been established. Once it has returned True, it should always return True thereafter (see the latching sketch after this list).
  2. The readinessCheck should return True after the startup check has passed and the caches have been populated, internal structures prepared, and so on. The function may switch back to False if those structures need repopulation and the application cannot serve users. It is important to ensure that not all instances in the cluster become unready at the same time, to avoid cascading failures.
  3. The livenessCheck performs a shallow check of the service and reports its state accordingly. Generally, the livenessCheck should return False only when the server needs to be restarted.
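
The "once True, always True" behaviour of the startupCheck can be captured by a small combinator; mkLatchingCheck is a name invented for this sketch:

import Control.Monad (when)
import Data.IORef (newIORef, readIORef, writeIORef)

-- Turn a (possibly deep and expensive) check into one that, once it
-- has returned True, keeps returning True forever.
mkLatchingCheck :: IO Bool -> IO (IO Bool)
mkLatchingCheck check = do
  passed <- newIORef False
  pure $ do
    done <- readIORef passed
    if done
      then pure True
      else do
        ok <- check
        when ok (writeIORef passed True)
        pure ok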

data K8sChecks Source #

Callbacks that the wrapper can use in order to understand the state of the application and react accordingly.

Constructors

K8sChecks 

Fields