# groupBy: Replacement definition of Data.List.GroupBy

- Version: 0.1.0.0
- Dependencies: base (>=4 && <5)
- License: MIT
- Copyright: 2018 Donnacha Oisín Kidney
- Author: Donnacha Oisín Kidney
- Maintainer: mail@doisinkidney.com
- Category: Data
- Home page: https://github.com/oisdk/groupBy#readme
- Bug tracker: https://github.com/oisdk/groupBy/issues
- Source repository: head: git clone https://github.com/oisdk/groupBy
- Uploaded: by oisdk at 2018-01-30T00:14:45Z


# groupBy

This provides a drop-in replacement for Data.List.groupBy, with benchmarks and tests.

The original Data.List.groupBy has perhaps unexpected behaviour: it compares each element to the first element of the current group, rather than to the adjacent element. In other words, if you wanted to group into ascending sequences:

>>> Data.List.groupBy (<=) [1,2,2,3,1,2,0,4,5,2]
[[1,2,2,3,1,2],[0,4,5,2]]


The replacement has three distinct advantages:

1. It groups adjacent elements, allowing the example above to function as expected:

>>> Data.List.GroupBy.groupBy (<=) [1,2,2,3,1,2,0,4,5,2]
[[1,2,2,3],[1,2],[0,4,5],[2]]

2. It is a good producer and consumer, with rules similar to those for Data.List.scanl. The old version was defined in terms of span:

groupBy                 :: (a -> a -> Bool) -> [a] -> [[a]]
groupBy _  []           =  []
groupBy eq (x:xs)       =  (x:ys) : groupBy eq zs
                           where (ys,zs) = span (eq x) xs


That definition prevents it from being a good producer or consumer.
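To make the fusion-friendly shape concrete, here is a sketch (not the package's actual source, which is more carefully tuned for laziness) of an adjacent-comparing groupBy written as a single foldr, the form GHC's list-fusion rules can work with. The name groupByAdj is chosen for illustration:

```haskell
-- A sketch of an adjacent-comparing groupBy as a foldr.  This is NOT
-- the package's exact definition; it only illustrates the
-- adjacent-comparison semantics in a fusion-friendly shape.
groupByAdj :: (a -> a -> Bool) -> [a] -> [[a]]
groupByAdj p = foldr f []
  where
    -- x joins the current group when it relates to that group's head,
    -- i.e. to its adjacent element in the input list
    f x ((y:ys):gs) | p x y = (x : y : ys) : gs
    -- otherwise x starts a new group
    f x gs                  = [x] : gs

-- groupByAdj (<=) [1,2,2,3,1,2,0,4,5,2] == [[1,2,2,3],[1,2],[0,4,5],[2]]
```

Because the whole definition is one foldr, the consumed list can fuse away; by contrast, the span-based definition above re-traverses part of the list and recurses outside any fold.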

3. It is significantly faster than the original in most cases.

## Tests

Tests ensure that the function is the same as the original when the relation supplied is an equivalence, and that it performs the expected adjacent comparisons when the relation isn't transitive.
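Both properties can be checked on small inputs. This is a hedged illustration, not the package's actual test suite; groupByAdj is a local adjacent-comparing stand-in for the replacement, defined here only so the snippet is self-contained:

```haskell
import qualified Data.List

-- Local stand-in for the adjacent-comparing replacement (illustrative)
groupByAdj :: (a -> a -> Bool) -> [a] -> [[a]]
groupByAdj p = foldr f []
  where
    f x ((y:ys):gs) | p x y = (x : y : ys) : gs
    f x gs                  = [x] : gs

-- With an equivalence relation such as (==), both definitions agree
agreesOnEquivalence :: Bool
agreesOnEquivalence =
  groupByAdj (==) "mississippi" == Data.List.groupBy (==) "mississippi"

-- With a non-transitive relation such as (<=), they differ
differsOnNonTransitive :: Bool
differsOnNonTransitive =
  groupByAdj (<=) [1,2,2,3,1,2,0,4,5,2 :: Int]
    /= Data.List.groupBy (<=) [1,2,2,3,1,2,0,4,5,2 :: Int]
```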

The tests also check that laziness is maintained, as defined by:

>>> head (groupBy (==) (1:2:undefined))
[1]

## Benchmarks

Benchmarks compare the function to three other implementations: the current Data.List.groupBy, a version provided by the utility-ht package, and a version provided by Brandon Simmons.

The benchmarks test functions that force the outer list:

length . groupBy eq


And functions which force the contents of the inner lists:

sum' = foldl' (+) 0

sum' . map sum' . groupBy eq


Each benchmark is run on lists where the groups are small, where the groups are large, and where there is only one group. The default size is 10000, but other sizes can be supplied to the benchmarks with the --size=[x,y,z] flag.

The new definition is slower than the old one only when the sublists are much larger than the outer list. The new definition could be made faster in that case by forcing the pair in the accumulator (or by using a strict pair); however, doing so would reduce it to the old definition's speed in the other cases, which I would imagine are more common.