| Copyright | (c) 2011 MailRank Inc. |
|---|---|
| License | Apache |
| Maintainer | Mark Hibberd <mark@hibberd.id.au>, Nathan Hunter <nhunter@janrain.com> |
| Stability | experimental |
| Portability | portable |
| Safe Haskell | None |
| Language | Haskell98 |
Low-level network connection management.
Synopsis
- connect :: Client -> IO Connection
- disconnect :: Connection -> IO ()
- defaultClient :: Client
- makeClientID :: IO ClientID
- exchange :: Exchange req resp => Connection -> req -> IO resp
- exchangeMaybe :: Exchange req resp => Connection -> req -> IO (Maybe resp)
- exchange_ :: Request req => Connection -> req -> IO ()
- pipeline :: Exchange req resp => Connection -> [req] -> IO [resp]
- pipelineMaybe :: Exchange req resp => Connection -> [req] -> IO [Maybe resp]
- pipeline_ :: Request req => Connection -> [req] -> IO ()
Connection management
connect :: Client -> IO Connection Source #
Connect to a server.
disconnect :: Connection -> IO () Source #
Disconnect from a server.
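A minimal sketch of pairing connect and disconnect with bracket so the connection is always released; it assumes the names documented on this page (including defaultClient, described below) are in scope:

```haskell
import Control.Exception (bracket)

-- Open a connection using the default client settings, run an action
-- against it, and always disconnect afterwards, even on exception.
withConnection :: (Connection -> IO a) -> IO a
withConnection = bracket (connect defaultClient) disconnect
```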
Client configuration
defaultClient :: Client Source #
Default client configuration. Talks to localhost, port 8087, with a randomly chosen client ID.
makeClientID :: IO ClientID Source #
Generate a random client ID.
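A sketch of deriving a client configuration from defaultClient with a freshly generated ID. The record fields used here (host, port, clientID) and the host name are assumptions for illustration, not confirmed by this page:

```haskell
-- Build a client for a non-default server with a random client ID.
-- The field names 'host', 'port' and 'clientID' are assumed; check the
-- Client definition for the actual record fields.
makeClient :: IO Client
makeClient = do
  cid <- makeClientID
  return defaultClient { host     = "riak.example.com"   -- hypothetical host
                       , port     = "8087"
                       , clientID = cid
                       }
```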
Requests and responses
Sending and receiving
exchange :: Exchange req resp => Connection -> req -> IO resp Source #
Send a request to the server, and receive its response.
exchangeMaybe :: Exchange req resp => Connection -> req -> IO (Maybe resp) Source #
Send a request to the server, and receive its response (which may be empty).
exchange_ :: Request req => Connection -> req -> IO () Source #
Send a request to the server, and receive its response, but do not decode it.
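A sketch that exercises the three sending styles over one connection. The request and response types are left polymorphic rather than tied to any concrete type, so any pair with the appropriate Request and Exchange instances will do:

```haskell
-- Send the same request in each style: fully decoded, possibly empty,
-- and received but not decoded.
sendOnce :: (Request req, Exchange req resp)
         => Connection -> req -> IO (resp, Maybe resp)
sendOnce conn req = do
  r  <- exchange conn req        -- wait for the decoded response
  mr <- exchangeMaybe conn req   -- the response may legitimately be empty
  exchange_ conn req             -- receive the response but do not decode it
  return (r, mr)
```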
Pipelining many requests
pipeline :: Exchange req resp => Connection -> [req] -> IO [resp] Source #
Send a series of requests to the server, back to back, and receive a response for each request sent. The sending and receiving will be overlapped if possible, to improve concurrency and reduce latency.
pipelineMaybe :: Exchange req resp => Connection -> [req] -> IO [Maybe resp] Source #
Send a series of requests to the server, back to back, and receive a response for each request sent (the responses may be empty). The sending and receiving will be overlapped if possible, to improve concurrency and reduce latency.
pipeline_ :: Request req => Connection -> [req] -> IO () Source #
Send a series of requests to the server, back to back, and receive their responses, but do not decode them. The sending and receiving will be overlapped if possible, to improve concurrency and reduce latency.
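A sketch of a pipelined batch, again with the request and response types left polymorphic. Compared with mapping exchange over the list one request at a time, the pipelined call lets the connection overlap sending and receiving:

```haskell
import Data.Maybe (catMaybes)

-- Send a batch of requests back to back and keep only the non-empty
-- responses.
runBatch :: (Request req, Exchange req resp)
         => Connection -> [req] -> IO [resp]
runBatch conn reqs = fmap catMaybes (pipelineMaybe conn reqs)
```

pipeline_ fits the same pattern when the responses are not needed in decoded form.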