
Releases: facebookresearch/NeuralCompression

NeuralCompression 0.3.1 Release, documentation fix

03 Oct 14:25
d4424b3


  • Fixes a typo in the documentation for using pretrained model weights (PR #224)
  • Fixes the scipy requirement to prevent errors in FID calculation (PR #226)

NeuralCompression 0.3.0 Release, generative compression refactor

25 Aug 15:40
aa08cc8


This release is paired with the MS-ILLM open-source implementation and includes utilities for generative compression, focusing on autoencoders with adversarial training.

List of updates:

  • Removed models redundant with CompressAI and added the HiFiC autoencoder model (PR #196)
  • Refactored metrics into a new metrics module, adding a HiFiC FID/256 implementation, SwAV-based FID, and the DISTS metric (PR #197)
  • Dataset implementations for OpenImages and DIV2K, plus testing improvements (PR #198)
  • Loss functions for non-saturating GAN, OASIS, MSE, and MSE-LPIPS (see the sketch after this list) (PR #199)
  • Created a model zoo and open-sourced the MS-ILLM model weights (PR #211)
  • MS-ILLM training framework (PRs #200, #210, #216, #220)
  • Repository build updates (PRs #208, #207, #214, #221)
  • Bug fix for the DVC decompress function (PR #202)
  • Bug fix for optical flow color (PR #203)
  • Apple Silicon support for tail estimation (PR #205)
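
For reference, the non-saturating GAN losses referenced above can be written in their numerically stable softplus form. This is a conceptual sketch of the objective, not the package's loss classes:

```python
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits):
    # -E[log D(x)] - E[log(1 - D(x_hat))], written with softplus:
    # softplus(-t) = -log(sigmoid(t)), softplus(t) = -log(1 - sigmoid(t)).
    return F.softplus(-real_logits).mean() + F.softplus(fake_logits).mean()

def generator_loss(fake_logits):
    # Non-saturating generator loss: -E[log D(x_hat)].
    return F.softplus(-fake_logits).mean()
```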

Contributors: @N1ghtstalker2022, @txya900619, @NripeshN, @mmuckley

NeuralCompression 0.2.2 Release, reference version of package prior to refactor

14 Jun 14:00
5dda2d5


As documented in #188, we are going to work on removing functionality from NeuralCompression that duplicates CompressAI. CompressAI is the standard package for neural compression research in PyTorch, and in general we should focus on maintaining only functionality that CompressAI doesn't already have. This release is intended as a reference release so that any users wishing to continue using the current functionality of NeuralCompression can do so by pinning the version to 0.2.2 (for example, `pip install neuralcompression==0.2.2`).

Changes since 0.2.1:

  • Patch for rate-distortion calculations (PR #177 and PR #178)
  • Removal of C++ extensions (PR #180)
  • Deterministic CLIC dataloader ordering (PR #182)
  • Removal of JAX-based entropy coders from main package (PR #183)
  • Inference-time image padding (PR #185; see the sketch after this list)
  • Fix for numpy dtypes (PR #187)
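
Inference-time padding typically rounds spatial dimensions up to a multiple of the model's total downsampling stride so that strided transforms divide evenly. A sketch under that assumption (the function and the multiple of 64 are illustrative, not the package's API):

```python
import torch.nn.functional as F

def pad_to_multiple(x, multiple=64):
    # x: (N, C, H, W). Pad H and W up to the next multiple of `multiple`
    # and return the original size so the output can be cropped back.
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(x, (0, pad_w, 0, pad_h), mode="reflect"), (h, w)
```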

Credits: @KarenUllrich, @desi-ivanova, @juliusberner, @0x00b1

NeuralCompression 0.2.1 Release, fixes for build system

12 Jan 14:54
3007a37


This release covers a few small fixes from PRs #171 and #172.

Dependencies

  • To retrieve versioning information, we now use importlib.metadata. This module is part of the standard library only for Python >= 3.8, so NeuralCompression now requires Python 3.8 or newer (#171).
  • Install requirements are flexible, whereas dev requirements are pinned (#171). This should improve CI stability while giving researchers the flexibility to tune their environment when using NeuralCompression.
  • torch has been removed as a build dependency (#172).
  • Other build dependencies have been modified to be flexible (#172).
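
As a sketch, version retrieval with the standard-library importlib.metadata looks like the following (the package's exact code may differ):

```python
from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("neuralcompression")
except PackageNotFoundError:
    # Running from a source checkout without an installed distribution.
    __version__ = "unknown"
```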

Build System

  • The C++ code behind _pmf_to_quantized_cdf introduced compilation requirements when running setup.py. Since we didn't configure our build system to handle specific operating systems, this caused a failed release upload to PyPI. The build system has been altered to use torch.utils.cpp_extension.load, which defers compilation until after package installation, at first use. We would like to improve this further at some point, but the modifications from #171 get the package stable. Note: there is a reasonable chance this could fail on non-Linux operating systems such as Windows; those users will still be able to use the other package features that don't rely on _pmf_to_quantized_cdf.
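
A sketch of the deferred-compilation pattern; the extension name and source path below are hypothetical:

```python
from torch.utils.cpp_extension import load

# JIT-compile the extension the first time it is needed, instead of at
# `pip install` time. The name and path here are illustrative only.
_ext = load(
    name="pmf_to_quantized_cdf",
    sources=["neuralcompression/cpp/pmf_to_quantized_cdf.cpp"],
)
```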

Other

  • Fixed a linting issue where isort was not checking in CI whether imports were properly sorted (#171).
  • Fixed a flaky test (#171).

v0.2.0

13 Dec 18:01
cc2c1b4


NeuralCompression is a PyTorch-based Python package intended to simplify neural network-based compression research. It is similar to (and shares some functionality with) fantastic libraries like TensorFlow Compression and CompressAI.

The major theme of the v0.2.0 release is autoencoders, particularly features useful for implementing the existing models of Ballé et al. and for expanding on these models in forthcoming research. In addition, 0.2.0 sees some code organization changes and published documentation. I recommend reading the new “Image Compression” example to see some of these changes.

API Additions

Data (neuralcompression.data)

Distributions (neuralcompression.distributions)

  • NoisyNormal: normal distribution with additive independent and identically distributed (i.i.d.) uniform noise.
  • UniformNoise: adapts a continuous distribution via additive independent and identically distributed (i.i.d.) uniform noise.
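
Both distributions model the standard additive-noise relaxation of quantization from Ballé et al., where rounding is replaced during training by uniform noise on [-0.5, 0.5). A conceptual sketch, not the package's API:

```python
import torch

y = torch.randn(8, 192, 16, 16)                  # latent from an encoder
noise = torch.empty_like(y).uniform_(-0.5, 0.5)  # i.i.d. uniform noise
y_tilde = y + noise  # differentiable stand-in for torch.round(y)
```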

Functional (neuralcompression.functional)

  • estimate_tails: estimates approximate tail quantiles.
  • log_cdf: logarithm of the distribution’s cumulative distribution function (CDF).
  • log_expm1: logarithm of e^{x} - 1.
  • log_ndtr: logarithm of the normal cumulative distribution function (CDF).
  • log_survival_function: logarithm of a distribution’s survival function evaluated at x.
  • lower_bound: torch.maximum with a gradient for x < bound.
  • lower_tail: approximates lower tail quantile for range coding.
  • ndtr: the normal cumulative distribution function (CDF).
  • pmf_to_quantized_cdf: transforms a probability mass function (PMF) into a quantized cumulative distribution function (CDF) for entropy coding.
  • quantization_offset: computes a distribution-dependent quantization offset.
  • soft_round_conditional_mean: conditional mean of x given noisy soft rounded values.
  • soft_round_inverse: inverse of soft_round.
  • soft_round: differentiable approximation of torch.round (see the sketch after this list).
  • survival_function: survival function of x. Generally defined as 1 - distribution.cdf(x).
  • upper_tail: approximates upper tail quantile for range coding.
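
As an illustration of the soft-round family, here is a minimal sketch following the formulation of Agustsson and Theis (2020), which interpolates between the identity (small alpha) and hard rounding (large alpha); the package's implementation may differ in details:

```python
import math
import torch

def soft_round(x: torch.Tensor, alpha: float) -> torch.Tensor:
    # Approaches the identity as alpha -> 0 and torch.round as alpha -> inf.
    m = torch.floor(x) + 0.5   # center of the quantization bin
    r = x - m                  # offset within the bin, in [-0.5, 0.5)
    return m + torch.tanh(alpha * r) / (2.0 * math.tanh(alpha / 2.0))
```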

Layers (neuralcompression.layers)

  • AnalysisTransformation2D: applies the 2D analysis transformation over an input signal.
  • ContinuousEntropy: base class for continuous entropy layers.
  • GeneralizedDivisiveNormalization: applies generalized divisive normalization to each channel across a batch of data (see the sketch after this list).
  • HyperAnalysisTransformation2D: applies the 2D hyper analysis transformation over an input signal.
  • HyperSynthesisTransformation2D: applies the 2D hyper synthesis transformation over an input signal.
  • NonNegativeParameterization: the parameter is subjected to an invertible transformation that slows down the learning rate for small values.
  • RateMSEDistortionLoss: rate-distortion loss.
  • SynthesisTransformation2D: applies the 2D synthesis transformation over an input signal.
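
For intuition, here is a conceptual sketch of generalized divisive normalization (Ballé et al.); it illustrates the math rather than the package's module, and the tensor shapes are assumptions:

```python
import torch

def gdn(x: torch.Tensor, beta: torch.Tensor, gamma: torch.Tensor) -> torch.Tensor:
    # x: (N, C, H, W); beta: (C,) positive offsets; gamma: (C, C) positive weights.
    # Each channel is divided by a learned norm pooled across all channels.
    norm = beta.view(1, -1, 1, 1) + torch.einsum("ij,njhw->nihw", gamma, x * x)
    return x * torch.rsqrt(norm)
```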

Models (neuralcompression.models)

End-to-end Optimized Image Compression

Johannes Ballé, Valero Laparra, Eero P. Simoncelli
https://arxiv.org/abs/1611.01704
  • PriorAutoencoder: base class for implementing prior autoencoder architectures.
  • FactorizedPriorAutoencoder

High-Fidelity Generative Image Compression

Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson
https://arxiv.org/abs/2006.09965
  • HiFiCEncoder
  • HiFiCDiscriminator
  • HiFiCGenerator

Variational Image Compression with a Scale Hyperprior

Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
https://arxiv.org/abs/1802.01436
  • HyperpriorAutoencoder: base class for implementing hyperprior autoencoder architectures.
  • MeanScaleHyperpriorAutoencoder
  • ScaleHyperpriorAutoencoder
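
These autoencoders are trained against a rate-distortion objective such as the RateMSEDistortionLoss layer above. A conceptual sketch of the underlying Lagrangian, with illustrative names and shapes rather than the package's API:

```python
import torch

def rate_mse_distortion(likelihoods, x, x_hat, lmbda):
    # Rate: total bits of the latents under the entropy model, per pixel.
    num_pixels = x.shape[0] * x.shape[-2] * x.shape[-1]
    bpp = -torch.log2(likelihoods).sum() / num_pixels
    # Distortion: mean squared error between input and reconstruction.
    mse = torch.mean((x - x_hat) ** 2)
    return bpp + lmbda * mse
```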

API Changes

  • neuralcompression.functional.hsv2rgb is now neuralcompression.functional.hsv_to_rgb.
  • neuralcompression.functional.learned_perceptual_image_patch_similarity is now neuralcompression.functional.lpips.
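
Call sites update accordingly, for example:

```python
# Before (0.1.x):
# from neuralcompression.functional import hsv2rgb, learned_perceptual_image_patch_similarity
# After (0.2.0):
from neuralcompression.functional import hsv_to_rgb, lpips
```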

Acknowledgements

Thank you to the following people for their advice:

Release to PyPI

19 Jul 18:46


This release publishes the project to PyPI and tests the GitHub action for releases.