- `De.Lz77.window` (@dinosaure, #116 & #115)
- `optint` and how we handle large files with `Gz`. Fix an error when we encode the `isize` into the deflated stream (@dinosaure, @igarnier, #121 & #120)
- Add a non-stream implementation (@clecat, @dinosaure, #102 & #92)
The non-blocking stream API has a cost: it must maintain a state across syscalls such as `read` and `write`. It's useful when we want to plug decompress behind something like a socket and care about memory consumption, but it has a big cost when we want to compress/decompress an object stored in one unique buffer.
The non-stream API gives the opportunity to inflate/deflate one unique buffer without the usual plumbing required by the non-blocking stream API. However, we are limited to processing only objects which can fit into a bigarray.
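For contrast, here is the plumbing the stream API asks for: the caller provides input/output bigstrings plus `refill`/`flush` callbacks. This is a minimal sketch assuming the README-style `De.Higher.uncompress` signature; exact labels and error types may differ between versions.

```ocaml
(* One-shot inflation of a string through the stream API.  A sketch:
   it assumes [De.Higher.uncompress ~w ~refill ~flush i o] returns a
   [(unit, [> `Msg of string ]) result]. *)
let inflate_string str =
  let i = De.bigstring_create De.io_buffer_size in
  let o = De.bigstring_create De.io_buffer_size in
  let w = De.make_window ~bits:15 in
  let r = Buffer.create 0x1000 in
  let p = ref 0 in
  (* [refill] feeds at most [io_buffer_size] bytes of [str] per call. *)
  let refill buf =
    let len = min (String.length str - !p) De.io_buffer_size in
    Bigstringaf.blit_from_string str ~src_off:!p buf ~dst_off:0 ~len ;
    p := !p + len ; len in
  (* [flush] collects the inflated output as it is produced. *)
  let flush buf len =
    Buffer.add_string r (Bigstringaf.substring buf ~off:0 ~len) in
  match De.Higher.uncompress ~w ~refill ~flush i o with
  | Ok () -> Ok (Buffer.contents r)
  | Error e -> Error e
```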
Regarding performance, the non-stream API is better than the non-blocking stream API. See the PR for more details about performance. On the book2 file (from the Calgary corpus):
- decompress (stream): 15 Mb/s (deflation), 76 Mb/s (inflation), ratio: 42.46 %
- decompress (non-stream): 17 Mb/s (deflation), 105 Mb/s (inflation), ratio: 34.66 %

Even if we checked the implementation with our tests (we ran ocaml-git and irmin with this patch), the implementation is young and we probably missed some details/bugs. So we advise users to compare, at least, the non-stream implementation with the non-blocking stream implementation if something looks wrong.
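For comparison with the stream sketch above, a hypothetical sketch of the one-shot, non-stream call: the whole input and the whole output must live in bigarrays, but no `refill`/`flush` plumbing is needed. The module path `De.Inf.Ns.inflate` and its `(bytes read * bytes written) result` shape are assumptions, not a stable reference; check the PR for the actual interface.

```ocaml
(* Non-stream inflation: [dst] must be pre-allocated large enough for
   the whole inflated object.  Both the module path and the result
   shape are assumptions. *)
let inflate_bigstring src =
  let dst = De.bigstring_create (Bigstringaf.length src * 4) in
  match De.Inf.Ns.inflate src dst with
  | Ok (_bytes_read, bytes_written) ->
    Ok (Bigstringaf.sub dst ~off:0 ~len:bytes_written)
  | Error e -> Error e
```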
- `d.hold` (@clecat, #99)
- `Higher.compress` arguments (@vect0r-vicall, #103)
- Improve Lz77 algorithms (@dinosaure, #108)

**breaking changes**: the deflation expects a new window, `De.Lz77.make_window`, instead of `De.make_window` (the new window is twice as large, to improve the compression algorithm).
Depending on the level and your corpus, we did not observe any performance regression on deflation (and #97 improves performance a lot). A higher level is slower (but the compression ratio is better). We advise using level 6 by default.
Note that users are able to make their own compression algorithm according to their corpus. An example of such an implementation is available in the new decompress.lz library, which fills a queue and compresses the input.
**breaking changes**: decompress expects a level between 0 and 9 (inclusive), instead of 0 and 3.
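Putting the two breaking changes together, deflation now looks roughly like this. The `De.Lz77.make_window` call is confirmed by the entry above; passing `~level` to `Higher.compress` is an assumption based on #103 ("Higher.compress arguments"), so treat this as a sketch rather than a reference.

```ocaml
(* Deflation with the new Lz77 window and a 0..9 level.  [~level] on
   [Higher.compress] is an assumption (#103 changed its arguments). *)
let deflate_string ?(level = 6) str =
  let i = De.bigstring_create De.io_buffer_size in
  let o = De.bigstring_create De.io_buffer_size in
  let w = De.Lz77.make_window ~bits:15 in  (* NOT [De.make_window] *)
  let q = De.Queue.create 0x1000 in        (* Lz77 -> encoder queue *)
  let r = Buffer.create 0x1000 in
  let p = ref 0 in
  let refill buf =
    let len = min (String.length str - !p) De.io_buffer_size in
    Bigstringaf.blit_from_string str ~src_off:!p buf ~dst_off:0 ~len ;
    p := !p + len ; len in
  let flush buf len =
    Buffer.add_string r (Bigstringaf.substring buf ~off:0 ~len) in
  De.Higher.compress ~level ~w ~q ~refill ~flush i o ;
  Buffer.contents r
```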
- ctypes reverse binding (@dinosaure, #98)
- `decompress.pipe`, which can compress/uncompress with the deflate, zlib or gzip formats
- `Zl` API (@dinosaure, #85)
- dune as a dependency of rfc1951 (@kit-ty-kate)
- `Zl` (@dinosaure, #84)
- `Higher` returns a `result` value instead of raising an exception (@dinosaure, @copy, #80)
- `decode`
- `fmt` dependency
- `Zl.Higher`

**breaking changes**
decompress.1.0.0 is 3 times faster at decompression than before. A huge amount of work was done to improve performance and coverage.
The main reason to update the API was to fix a bad design decision regarding the split between compression and encoding. Users are now able to implement their own compression algorithm and use it.
The release comes with regressions:
- decompress only supports `Bigarray` now, not `Bytes`

Of course, v1.0.0 comes with fixes and improvements:
- decompress is able to compress/uncompress the Calgary corpus
- decompression is 3 times faster than `decompress.v0.9.0` and 3 times slower than `zlib`

decompress is split into 2 main modules:
- `dd`, which implements RFC1951
- `zz`, which implements ZLIB

Their APIs are pretty close to what decompress.v0.9.0 does, with some advantages on `dd`:
- `Dd.L`

As a response to #25, `dd` provides a higher-level API resembling camlzip.
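To give an idea of the low-level `dd` interface these entries refer to, here is a sketch of the non-blocking decoder loop. `dd` was later renamed `De`, and names such as `Dd.Inf.decoder`, `dst_rem` and `flush` are assumptions carried over from that API family, not a verified v1.0.0 reference.

```ocaml
(* Inflate a string with the non-blocking decoder.  With a [`String]
   source, [`Await] cannot happen; [`Flush] asks us to drain [o]. *)
let inflate_string str =
  let o = Dd.bigstring_create Dd.io_buffer_size in
  let w = Dd.make_window ~bits:15 in
  let r = Buffer.create 0x1000 in
  let decoder = Dd.Inf.decoder (`String str) ~o ~w in
  let rec go () = match Dd.Inf.decode decoder with
    | `Flush ->
      (* [dst_rem] is the free space left in [o]. *)
      let len = Dd.io_buffer_size - Dd.Inf.dst_rem decoder in
      Buffer.add_string r (Bigstringaf.substring o ~off:0 ~len) ;
      Dd.Inf.flush decoder ; go ()
    | `Await -> assert false (* impossible with a [`String] source *)
    | `Malformed err -> Error (`Msg err)
    | `End ->
      let len = Dd.io_buffer_size - Dd.Inf.dst_rem decoder in
      Buffer.add_string r (Bigstringaf.substring o ~off:0 ~len) ;
      Ok (Buffer.contents r) in
  go ()
```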
- mmap (@XVilka, @dinosaure, @hannesm, #68, #69, #71)
- decompress (@dinosaure, #63)
- `Bytes.t` and `Bigstring.t` as I/O buffers (@dinosaure)
- **breaking change**: new interface of decompress
We wrap the API in `Zlib_{inflate/deflate}` and add `RFC1951_{inflate/deflate}`.
- jbuilder/dune (task from @samoht)
- zlib header
- Fixed infinite loop (task from @cfcs)
See 2e3af68: decompress had an infinite loop when the inflated dictionary does not provide any bindings (and the length of the opcode is <= 0). In this case, decompress now expects an empty input and provides an empty output in any case.
- `sync_flush`, `partial_flush`, `full_flush` (experimental)
- topkg