amazonka-s3-streaming-0.2.0.5: Provides conduits to upload data to S3 using the Multipart API

Safe Haskell: None
Language: Haskell2010

Network.AWS.S3.StreamingUpload


Documentation

streamUpload Source #

Arguments

:: (MonadResource m, AWSConstraint r m, MonadAWS m) 
=> Maybe ChunkSize

Optional chunk size

-> CreateMultipartUpload

Upload location

-> Sink ByteString m CompleteMultipartUploadResponse 

Given a CreateMultipartUpload, creates a Sink which will sequentially upload the data streamed in, in chunks of at least chunkSize, and return the CompleteMultipartUploadResponse.

If uploading any part fails, an attempt is made to abort the multipart upload, but if an upload has already failed this abort may well fail too. ListMultipartUploads can be used to list any pending uploads - it is important to abort multipart uploads because you will be charged for storage of the parts until the upload is completed or aborted. See the AWS documentation for more details.

May throw Error
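A minimal sketch of driving streamUpload from a conduit source. The bucket name, object key, and local file path are assumptions for illustration; credentials are discovered from the environment as usual for amazonka:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Conduit (runConduit, (.|))
import Data.Conduit.Binary (sourceFile)
import Network.AWS (Credentials (Discover), newEnv, runAWS, runResourceT)
import Network.AWS.S3 (createMultipartUpload)
import Network.AWS.S3.StreamingUpload (streamUpload)

main :: IO ()
main = do
  env <- newEnv Discover
  res <- runResourceT . runAWS env $
    runConduit $
      sourceFile "large-backup.tar"   -- hypothetical local file
        .| streamUpload Nothing       -- Nothing = default (minimum) chunk size
             (createMultipartUpload "my-bucket" "backups/large-backup.tar")
  print res
```

Because streamUpload is a Sink, any ByteString producer can feed it, not just a file source.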

data UploadLocation Source #

Specifies whether to upload a file or a ByteString.

Constructors

FP FilePath

A file to be uploaded

BS ByteString

A strict ByteString

concurrentUpload Source #

Arguments

:: (AWSConstraint r m, MonadAWS m, MonadBaseControl IO m) 
=> Maybe ChunkSize

Optional chunk size

-> Maybe NumThreads

Optional number of threads to upload with

-> UploadLocation

Whether to upload a file on disk or a ByteString that's already in memory.

-> CreateMultipartUpload

Description of where to upload.

-> m CompleteMultipartUploadResponse 

Allows a file or ByteString to be uploaded concurrently, using the async library. The chunk size may optionally be specified, but will be at least minimumChunkSize, and may be made larger than this if the ByteString or file is large enough to otherwise require more than 10,000 chunks.

Files are mmapped into chunkSize chunks and each chunk is uploaded in parallel. This considerably reduces the memory required compared to reading the contents into memory as a strict ByteString. The usual caveats about mmapped files apply: if the file is modified during this operation, the data may become corrupt.

May throw Error, or IOError; an attempt is made to cancel the multipart upload on any error, but this may also fail if, for example, the network connection has been broken. See abortAllUploads for a crude cleanup method.
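A hedged sketch of a concurrent upload from disk. The chunk size, thread count, bucket, key, and file path shown here are all assumptions, not defaults of the library:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.AWS (Credentials (Discover), newEnv, runAWS, runResourceT)
import Network.AWS.S3 (createMultipartUpload)
import Network.AWS.S3.StreamingUpload (UploadLocation (FP), concurrentUpload)

main :: IO ()
main = do
  env <- newEnv Discover
  res <- runResourceT . runAWS env $
    concurrentUpload
      (Just (8 * 1024 * 1024))  -- 8 MB chunks (assumed; must be >= minimumChunkSize)
      (Just 4)                  -- upload with 4 threads (assumed)
      (FP "large-backup.tar")   -- hypothetical file on disk; use BS for in-memory data
      (createMultipartUpload "my-bucket" "backups/large-backup.tar")
  print res
```

Passing Nothing for either of the first two arguments falls back to the library's defaults.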

abortAllUploads :: MonadAWS m => BucketName -> m () Source #

Aborts all uploads in a given bucket - useful for cleaning up.
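As a crude cleanup after failed uploads, this could be run against the affected bucket (the bucket name is an assumption):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.AWS (Credentials (Discover), newEnv, runAWS, runResourceT)
import Network.AWS.S3.StreamingUpload (abortAllUploads)

main :: IO ()
main = do
  env <- newEnv Discover
  -- Abort every pending multipart upload in the bucket to stop storage charges.
  runResourceT . runAWS env $ abortAllUploads "my-bucket"
```

Note this aborts all pending multipart uploads in the bucket, including any started by other clients.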

minimumChunkSize :: ChunkSize Source #

Minimum size of data which will be sent in a single part; currently 6 MB.