
# DAA DECOMPRESSOR SOFTWARE

## Working platform
Currently, the decompressor is used on IBM CAPI 2.0 with the SNAP interface. The demo works on this platform as follows: fetch data from memory, do the decompression, and send the decompression result back. (If you want to use the decompressor on another platform, only the files in user_ip and source are needed.)

## Directory and file
- Source: Verilog files for the decompressor
- Ip: IP files for the decompressor (tcl files)
- Interface: VHDL file to connect the decompressor to the IBM CAPI platform and run a demo
- Sw: software to test the decompressor on the IBM CAPI platform

## Generating IPs
Currently, this project utilizes some IP cores which are generated by the tcl file (create_action_ip.tcl).

## Parameters of implementation on Vivado
Currently, this decompressor passes the build test on the ADM-9V3 FPGA card (FPGA: XCVU3P-2-FFVC1517) at a clock speed of 250 MHz. Please choose the following place and route strategy: Congestion_SpreadLogic_medium. With the default strategy, the timing constraints may fail due to congestion.

## Recommended compression software
If you use the compression software from Google, the performance of this decompressor may be bad for some special data with extremely high data dependency. In this case, it is recommended to use a modified compression software. In this version, the compression algorithm is slightly changed, but the compression result is still in standard Snappy format. It causes almost no change in the compression ratio, while greatly reducing the data dependency and making parallel decompression more efficient.

## Interface
Ports of the decompressor:

- Input start // Start the decompressor after compression_length and decompression_length are set. The user should set it to 1 to start the decompressor, and set it back to 0 after 1 cycle.
- Output done // Whether the decompression is done.
- Output last // Whether the data is the last one in a burst.
- Input compression_length // Length of the data before decompression (compressed data).
- Input decompression_length // Length of the data after decompression (uncompressed data).
- Input in_metadata_valid // Whether or not the data on the compression_length and decompression_length ports is valid.
- Output in_metadata_ready // Whether or not the decompressor is ready to receive data on its compression_length and decompression_length ports.
- Input in_data_valid // Whether or not the data on the in_data port is valid.
- Output in_data_ready // Whether or not the decompressor is ready to receive data on its in_data port.
- Output out_data_valid // Whether or not the data on the out_data port is valid.
- Output out_data_byte_valid // Which bytes of the output are valid.
- Input out_data_ready // Whether or not the component following the decompressor is ready to receive data.

The communication protocol follows a few steps:
(1) Set the metadata (compression_length and decompression_length).
(2) Set start to 1, and set it back to 0 after 1 cycle.
(3) Transfer the compressed data in and the decompressed data out through the valid/ready handshakes.
(4) After the "done" signal returns, a new decompression can be processed, starting again from step (1).

## LZ77Compressor
Helper class for compression algorithms that use the ideas of LZ77.

Most LZ77-derived algorithms split input data into blocks of uncompressed data (called literal blocks) and back-references (pairs of offsets and lengths) that state "add length bytes that are the same as those already written starting offset bytes before the current position". The details of how those blocks and back-references are encoded are quite different between the algorithms, and some algorithms perform additional steps (Huffman encoding in the case of DEFLATE, for example).
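The copy rule in that definition is easiest to see in code, because a back-reference's length may exceed its offset. Below is a minimal, library-independent Java sketch (the class and method names are illustrative only): copying byte-by-byte lets later iterations re-read bytes written by earlier ones, which is how a short seed gets repeated.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

final class BackReferenceDemo {
    // Expands one back-reference (offset, length) at write position pos in buf
    // and returns the new write position. The byte-by-byte copy is deliberate:
    // when length > offset the source range overlaps the destination, and each
    // iteration must be able to read bytes written by earlier iterations.
    static int expand(byte[] buf, int pos, int offset, int length) {
        for (int i = 0; i < length; i++) {
            buf[pos + i] = buf[pos + i - offset];
        }
        return pos + length;
    }

    public static void main(String[] args) {
        // Literal block "abcab" followed by the back-reference (offset=3, length=6).
        byte[] buf = Arrays.copyOf("abcab".getBytes(StandardCharsets.US_ASCII), 11);
        int end = expand(buf, 5, 3, 6);
        // Prints "abcabcabcab": six bytes drawn from a window only three bytes deep.
        System.out.println(new String(buf, 0, end, StandardCharsets.US_ASCII));
    }
}
```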

This class attempts to extract the core logic - finding back-references - so it can be re-used. It follows the algorithm explained in section 4 of RFC 1951 (DEFLATE) and currently doesn't implement the "lazy match" optimization of said RFC. The three-byte hash function used in this class is the same as the one used by zlib and InfoZIP's ZIP implementation of DEFLATE, and the whole class is strongly inspired by InfoZIP's implementation.

LZ77 is used vaguely here (as well as in many other places that talk about it :-); LZSS would likely be closer to the truth, but LZ77 has become the synonym for a whole family of algorithms.

The API consists of a compressor that is fed bytes and emits LZ77Compressor.Blocks to a registered callback, where the blocks represent either literal blocks, back-references, or end-of-data markers. In order to ensure the callback receives all information, the finish method must be used once all data has been fed into the compressor. A usage sketch is given after the parameter list below.

Several parameters influence the outcome of the "compression":
- windowSize: the size of the sliding window, must be a power of two - this determines the maximum offset a back-reference can take. The compressor maintains a buffer of twice of windowSize - real-world values are in the area of 32k.
- minBackReferenceLength: Minimal length of a back-reference found. A true minimum of 3 is hard-coded inside of this implementation, but bigger lengths can be configured.
- maxBackReferenceLength: Maximal length of a back-reference found.
- maxOffset: Maximal offset of a back-reference.
- maxLiteralLength: Maximal length of a literal block.

Since: 1.
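For concreteness, here is a short usage sketch of the API just described. It assumes the org.apache.commons.compress.compressors.lz77support package; LZ77Compressor and its Block types appear in the text above, but the Parameters.builder method names and the cap of 258 are taken from recent Commons Compress javadocs and should be checked against your version.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.compress.compressors.lz77support.LZ77Compressor;
import org.apache.commons.compress.compressors.lz77support.Parameters;

public class Lz77CompressorDemo {
    public static void main(String[] args) throws IOException {
        // windowSize must be a power of two; 32k is a typical real-world value.
        Parameters params = Parameters.builder(32 * 1024)
                .withMinBackReferenceLength(3)   // the hard-coded true minimum
                .withMaxBackReferenceLength(258) // DEFLATE-style cap, for illustration
                .build();

        // The callback receives every emitted block; a real encoder would
        // serialize literals and (offset, length) pairs in its own format.
        LZ77Compressor compressor = new LZ77Compressor(params, block -> {
            if (block instanceof LZ77Compressor.LiteralBlock) {
                LZ77Compressor.LiteralBlock lit = (LZ77Compressor.LiteralBlock) block;
                System.out.println("literal block, length " + lit.getLength());
            } else if (block instanceof LZ77Compressor.BackReference) {
                LZ77Compressor.BackReference ref = (LZ77Compressor.BackReference) block;
                System.out.println("back-reference, offset " + ref.getOffset()
                        + ", length " + ref.getLength());
            } else {
                System.out.println("end of data"); // LZ77Compressor.EOD
            }
        });

        compressor.compress("blah blah blah blah blah!".getBytes(StandardCharsets.US_ASCII));
        compressor.finish(); // without this, trailing blocks never reach the callback
    }
}
```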
