TrainΒΆ
quantize_model(frame_encoder: FrameEncoder, frame: Frame, frame_encoder_manager: FrameEncoderManager)

Quantize a FrameEncoder compressing a Frame under a rate constraint lmbda, and return it.

This function iterates on all the neural networks sent from the encoder to the decoder, listed in frame_encoder.coolchic_encoder.modules_to_send. For each module \(m\), we want to find the most suited pair of quantization steps for the weights and the biases \((\Delta_w^m, \Delta_b^m)\).
To do so, a greedy search is used: we quantize the weights and biases using all the possible pairs of quantization steps, and we compute the usual loss function. The loss measures the impact of the NN quantization steps \((\Delta_w^m, \Delta_b^m)\) on the MSE / rate of the decoded image and on the rate of the NN.
In the end, we select the pair of quantization steps minimizing the loss:

\[\begin{split}(\Delta_w^m, \Delta_b^m) = \arg\min ||\mathbf{x} - \hat{\mathbf{x}}||^2 + \lambda (\mathrm{R}(\hat{\mathbf{x}}) + \mathrm{R}_{NN}), \text{ with } \begin{cases} \mathbf{x} & \text{the original image}\\ \hat{\mathbf{x}} & \text{the coded image}\\ \mathrm{R}(\hat{\mathbf{x}}) & \text{a measure of the rate of } \hat{\mathbf{x}} \\ \mathrm{R}_{NN} & \text{the rate of the neural networks} \end{cases}\end{split}\]

Then we quantize the next module to be sent.
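The snippet below is a minimal sketch of this greedy search for a single module, assuming a PyTorch module and a loss_fn callback that evaluates the rate-distortion loss above. The names greedy_quantize_module, candidate_steps, and loss_fn are hypothetical stand-ins, not the actual Cool-Chic API; the real candidate grid and loss plumbing differ.

```python
# Sketch of the greedy quantization-step search for one module (assumed names).
import itertools
import math
import torch


def _quantize_state(state: dict, delta_w: float, delta_b: float) -> dict:
    """Round every parameter to a multiple of its quantization step."""
    return {
        name: torch.round(p / (delta_b if "bias" in name else delta_w))
        * (delta_b if "bias" in name else delta_w)
        for name, p in state.items()
    }


def greedy_quantize_module(module: torch.nn.Module,
                           candidate_steps: list[float],
                           loss_fn) -> tuple[float, float]:
    """Try every (delta_w, delta_b) pair; keep the one minimizing the loss."""
    original_state = {k: v.clone() for k, v in module.state_dict().items()}
    best_loss, best_pair = math.inf, None

    for delta_w, delta_b in itertools.product(candidate_steps, repeat=2):
        # Quantize weights and biases with the candidate pair of steps...
        module.load_state_dict(_quantize_state(original_state, delta_w, delta_b))
        # ...and evaluate the rate-distortion loss with this module in place.
        loss = loss_fn(module)
        if loss < best_loss:
            best_loss, best_pair = loss, (delta_w, delta_b)

    # Leave the module quantized with the best pair before moving on to the
    # next module to be sent.
    module.load_state_dict(_quantize_state(original_state, *best_pair))
    return best_pair
```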
Warning

The parameter frame_encoder_manager, which tracks the encoding time of the frame (total_training_time_sec) and the number of encoding iterations (iterations_counter), is modified in place by this function.

- Parameters:
frame_encoder (FrameEncoder) – Model to be compressed.
frame (Frame) – Original frame to code, including its references.
frame_encoder_manager (FrameEncoderManager) – Contains (among other things) the rate constraint \(\lambda\) and the description of the warm-up preset. It is also used to track the total encoding time and the number of encoding iterations. Modified in place.
- Returns:
Model with quantized parameters.
- Return type:
FrameEncoder
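For orientation, a call might look like the following sketch, assuming frame_encoder, frame, and frame_encoder_manager were built earlier in the encoding pipeline (their construction is not shown here).

```python
# Hypothetical call site; the three objects are assumed to already exist.
frame_encoder = quantize_model(
    frame_encoder=frame_encoder,                  # model trained so far
    frame=frame,                                  # frame to code, with its references
    frame_encoder_manager=frame_encoder_manager,  # holds lmbda; updated in place
)
```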