hworker: A reliable at-least-once job queue built on top of redis.

[ library, unclassified ]

See README.


Note: This package has metadata revisions in the cabal description newer than included in the tarball. To unpack the package including the revisions, use 'cabal get'.

Versions 0.1.0.0, 0.1.0.1
Change log CHANGELOG.md
Dependencies aeson, attoparsec, base (>=4.8 && <5), bytestring, hedis (>=0.6.5), text, time (>=1.5), uuid (>=1.2.6 && <1.4)
License ISC
Author Daniel Patterson
Maintainer dbp@dbpmail.net
Revised Revision 1 made by HerbertValerioRiedel at 2017-01-01T18:52:57Z
Home page http://github.com/dbp/hworker
Uploaded by DanielPatterson at 2015-11-02T00:46:43Z
Distributions LTSHaskell:0.1.0.1
Reverse Dependencies 1 direct, 1 indirect
Downloads 2124 total (12 in the last 30 days)
Rating 2.0 (votes: 1) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2016-11-30 [all 3 reports]

Readme for hworker-0.1.0.1

About

hworker is a Redis-backed persistent at-least-once queue library, vaguely inspired by Sidekiq for Ruby. It is intended to be a simple, reliable mechanism for processing background tasks. Jobs can be created by a Haskell application or by any application that can push JSON data structures of the right shape into a Redis queue. The application that processes the jobs need not be the same one that creates them (they just need to talk to the same Redis server and use the same serialization to/from JSON).

Stability

This has been running in one application, sending email with hworker-ses, for several months. Traffic is relatively low most of the time (transactional messages), with spikes of 10k-30k messages (mailing blasts).

Important Note

The expiration of jobs is really important. It defaults to 120 seconds, which may be too short depending on your application (for things like sending emails, it is probably fine). This timeout matters because if a job ever runs longer than it, the monitor will think that the job failed in some inexplicable way (like the server running the job died) and will add the job back to the queue to be run again. Under the semantics of this job processor, a job running multiple times is not a failure case, but it's obviously not something you want to happen, so be sure to set the timeout to something reasonable for your application.
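
If the default is too short for your jobs, raise the timeout when you create the hworker instance. The sketch below is illustrative only: it assumes a createWith-style constructor together with defaultHworkerConfig and an hwconfigTimeout field, so check the System.Hworker documentation for the exact names; the ReportJob/ReportState types are invented.

{-# LANGUAGE DeriveGeneric         #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverloadedStrings     #-}

import Data.Aeson     (FromJSON, ToJSON)
import GHC.Generics   (Generic)
import System.Hworker

data ReportJob = BuildReport deriving (Generic, Show)
instance ToJSON ReportJob
instance FromJSON ReportJob

data ReportState = ReportState

instance Job ReportState ReportJob where
  job ReportState BuildReport = return Success   -- the slow report work would go here

-- Assumed names: createWith, defaultHworkerConfig, hwconfigTimeout.  The point
-- is only that the timeout should comfortably exceed your longest-running job.
startReportWorker :: IO (Hworker ReportState ReportJob)
startReportWorker =
  createWith ((defaultHworkerConfig "reports" ReportState)
                { hwconfigTimeout = 600 })   -- ten minutes instead of the 120-second default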

Overview

To define jobs, you define a serialized representation of the job and a function that runs the job, which returns a status. The behavior of uncaught exceptions is defined when you create the worker - it can be either Failure or Retry. Jobs that return Failure are removed from the queue, whereas jobs that return Retry are added again. The only difference between Success and Failure is that a Failure carries a message that is logged (neither is run again).
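
For concreteness, a job definition looks roughly like the sketch below. It assumes the System.Hworker API described here - a Job class whose job function takes the shared state and the job and returns Success, Failure, or Retry - and the PrintJob type is invented for illustration.

{-# LANGUAGE DeriveGeneric         #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverloadedStrings     #-}

import Data.Aeson     (FromJSON, ToJSON)
import GHC.Generics   (Generic)
import System.Hworker

-- The serialized representation of a job: anything with To/FromJSON instances.
data PrintJob = PrintJob String deriving (Generic, Show)
instance ToJSON PrintJob
instance FromJSON PrintJob

-- The shared environment handed to every job run.
data PrintState = PrintState

instance Job PrintState PrintJob where
  job PrintState (PrintJob msg)
    | null msg  = return (Failure "empty message")  -- logged, never run again
    | otherwise = do
        putStrLn msg       -- the actual work
        return Success     -- done, never run again
  -- returning (Retry "reason") instead would put the job back on the queue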

Example

See the example directory in the repository.
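
As a rough sketch of its shape (continuing the PrintJob/PrintState definitions above, and assuming create, queue, worker, and monitor from System.Hworker):

{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad      (forever)
import System.Hworker

-- Continues the PrintJob/PrintState sketch from the Overview section.
main :: IO ()
main = do
  hw <- create "printer" PrintState   -- "printer" becomes part of the Redis key names
  _  <- forkIO (worker hw)            -- runs queued jobs
  _  <- forkIO (monitor hw)           -- re-queues jobs that stay in progress too long
  forever $ do
    _ <- queue hw (PrintJob "hello")  -- enqueue a job roughly once a second
    threadDelay 1000000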

Semantics

The behavior of this queue processor is at-least-once.

We rely on the defined behavior of Redis for reliability. Once a job has been queued, it is guaranteed to be run eventually, provided some worker and monitor threads exist. If the worker thread that was running a given job dies, the job will eventually be retried (if you do not want this behavior, do not start any monitor threads). Once the job completes, provided nothing kills the worker thread in the intervening time, jobs that returned Success will not be run again, jobs that return Failure will have their messages logged and will not be run again, and jobs that return Retry will be queued again. If something kills the worker thread before these acknowledgements go through, the job will be retried. Exceptions triggered within the job cannot affect the worker thread - what they do to the job is defined at startup (they can cause either a Failure or Retry).

Any deviations from this behavior are considered bugs that will be fixed.

Redis Operations

Under the hood, we will have the following data structures in Redis (name is the name you set when you create the hworker instance):

hworker-jobs-name: a list of JSON-serialized job descriptions

hworker-progress-name: a hash of jobs that are in progress, mapping to the time they were started

hworker-broken-name: a hash of jobs that couldn't be deserialized, mapping to the time they were encountered; this most likely means you changed the serialization format with jobs still in the queue, or you pointed different applications at the same queues.

hworker-failed-name: a record of the jobs that failed (limited in size based on config).
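
For debugging, these structures can be inspected directly. Here is a minimal hedis sketch; the key names are assumptions based on the naming scheme above, for an instance named "printer".

{-# LANGUAGE OverloadedStrings #-}

import           Control.Monad.IO.Class (liftIO)
import qualified Database.Redis         as R

-- Peek at the pending list and the in-progress hash for an instance named
-- "printer".  The key names are assumptions based on the scheme above.
main :: IO ()
main = do
  conn <- R.connect R.defaultConnectInfo
  R.runRedis conn $ do
    pending    <- R.llen    "hworker-jobs-printer"
    inProgress <- R.hgetall "hworker-progress-printer"
    liftIO (print pending)      -- Right <number of queued jobs>
    liftIO (print inProgress)   -- Right [(job, time started)]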

In the following pseudo-code, I'm using MULTI...EXEC to indicate atomic blocks of code. These are actually implemented with Lua and EVAL, but I think it's easier to read this way. If you want to see what's actually happening, just read the code - it's not very long!

When a worker wants to do work, the following happens:

now = TIME
MULTI
v = RPOP hworker-jobs-name
if v
  HSET hworker-progress-name v now
EXEC
return v
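
In the real implementation this block is a single Lua script run via EVAL. As a rough illustration only - not the package's actual script, and with key names assuming an instance called "printer" - the same pop-and-mark step could be expressed through hedis like this:

{-# LANGUAGE OverloadedStrings #-}

import           Data.ByteString (ByteString)
import qualified Database.Redis  as R

-- Illustrative sketch of an atomic pop-and-mark via EVAL, in the spirit of the
-- pseudo-code above.  The caller passes the current time, since Lua scripts
-- should not read the clock themselves.
popAndMark :: ByteString -> R.Redis (Either R.Reply (Maybe ByteString))
popAndMark now =
  R.eval script ["hworker-jobs-printer", "hworker-progress-printer"] [now]
  where
    script =
      "local v = redis.call('rpop', KEYS[1])\n\
      \if v then\n\
      \  redis.call('hset', KEYS[2], v, ARGV[1])\n\
      \end\n\
      \return v"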

When it completes the job, it does the following:

v = JOB
HDEL hworker-progress-name v

If the job returned Retry, the following occurs:

v = JOB
MULTI
LPUSH hworker-jobs-name v
HDEL hworker-progress-name v
EXEC

A monitor runs on another thread that will re-run jobs that stay in progress for too long (as that indicates that something unknown went wrong). The operation that it runs periodically is:

keys = HKEYS (or HSCAN) hworker-progress-name
for keys as v:
  started = HGET hworker-progress-name v
  if started < TIME - timeout
    MULTI
    RPUSH hworker-jobs-name v
    HDEL hworker-progress-name v
    EXEC

Note that what the monitor does differs slightly from what Retry does: because the worker pops jobs with RPOP, the monitor's RPUSH puts a job at the front of the queue (it will be picked up next), whereas Retry's LPUSH puts it at the back.

Primary Libraries Used

  • hedis
  • aeson

Contributors

Build Status

Circle CI