Reference

Config

Camera configuration.

quarklib.config.get(cam, *, controls=None, format=True, crop=True, framerate=True, protect_controls=True)

Get camera configuration.

Parameters:
  • cam (qamlib.Camera) – The camera to get the configuration from.

  • controls (list[str] | None) – Names of the controls to save; if None, all are saved.

  • format (bool) – Save the format.

  • crop (bool) – Save the crop(s).

  • framerate (bool) – Save the framerate.

  • protect_controls (bool) – Protect sensor specific controls (ADC Gain, Black Level and VRAMP).

Raises:

ValueError – If a non-existent control is specified in controls.

Returns:

The camera configuration

Return type:

dict[str, Any]

quarklib.config.save(cam, path, *, controls=None, format=True, crop=True, framerate=True, protect_controls=True)

Save camera configuration.

Parameters:
  • cam (qamlib.Camera) – The camera to save the configuration from.

  • path (str) – File path to put the saved configuration.

  • controls (list[str] | None) – Names of the controls to save; if None, all are saved.

  • format (bool) – Save the format.

  • crop (bool) – Save the crop(s).

  • framerate (bool) – Save the framerate.

  • protect_controls (bool) – Protect sensor specific controls (ADC Gain, Black Level and VRAMP).

Return type:

None

quarklib.config.set(cam, config, *, controls=None, format=True, crop=True, framerate=True, protect_controls=True)

Set camera configuration.

Parameters:
  • cam (qamlib.Camera) – The camera to load the configuration into.

  • config (dict[str, Any]) – The camera configuration to set.

  • controls (list[str] | None) – Names of the controls to load; if None, all are loaded.

  • format (bool) – Load the format.

  • crop (bool) – Load the crop(s).

  • framerate (bool) – Load the framerate.

  • protect_controls (bool) – Protect sensor specific controls (ADC Gain, Black Level and VRAMP).

Raises:
  • ValueError

    • When setting the crop value, if the configured crop is not a rectangle of four points.

    • When specifying controls, if config does not contain any.

    • When trying to set a control that does not exist on the camera.

    • When trying to set a control that is not included in config.

    • When trying to set a control that does not specify a value to set.

    • When using a value with a payload to set a control that does not support payloads.

    • When trying to set the trigger sequence with an incomplete value.

    • When trying to set an unsupported control type.

  • TypeError – When trying to set an integer control with a non-integer value.

Returns:

Nothing.
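
Example – a minimal sketch of a get/set round trip; the device path "/dev/qtec/video0" is an assumption, use whatever path or device number your qamlib.Camera accepts:

    import qamlib
    from quarklib import config

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path

    # Snapshot everything: all controls plus format, crop and framerate.
    snapshot = config.get(cam)

    # ... change exposure, crop, etc. while experimenting ...

    # Restore the snapshot when done.
    config.set(cam, snapshot)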

quarklib.config.load(cam, path, *, controls=None, format=True, crop=True, framerate=True, protect_controls=True)

Load camera configuration.

Parameters:
  • cam (qamlib.Camera) – The camera to load the configuration into.

  • path (str) – File path to the saved configuration.

  • controls (list[str] | None) – Names of the controls to load; if None, all are loaded.

  • format (bool) – Load the format.

  • crop (bool) – Load the crop(s).

  • framerate (bool) – Load the framerate.

  • protect_controls (bool) – Protect sensor specific controls (ADC Gain, Black Level and VRAMP).

Raises:

json.decoder.JSONDecodeError – If the JSON config fails to decode.

Return type:

None
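
Example – saving and restoring a configuration file; a sketch where the device and file paths are assumptions (the configuration is stored as JSON):

    import qamlib
    from quarklib import config

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path

    # Persist the current state: all controls plus format, crop and framerate.
    config.save(cam, "/tmp/camera-config.json")

    # Later, restore it. Sensor specific controls (ADC Gain, Black Level,
    # VRAMP) are protected by default.
    config.load(cam, "/tmp/camera-config.json")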

quarklib.config.reset_controls(cam, controls=None, protect_controls=True)

Reset controls to their default value.

Parameters:
  • cam (qamlib.Camera) – The camera to reset controls for.

  • controls (list[str] | None) – The names of the controls to reset.

  • protect_controls (bool) – Protect sensor specific controls (ADC Gain, Black Level and VRAMP).

Return type:

None

LUT

LUT generation functions.

This module contains functions for generating some common look up tables (LUTs). They assume that all values in 0, 1, ..., 2^bits - 1 are valid, with bits being the chosen bit depth (default is 12).

They also support chaining via the base argument. For some functions chaining might not make much sense, but it is still possible (e.g. a linear LUT will not be linear if the base is not linear).
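
Example – chaining LUTs through the base argument; a sketch where the gamma value 0.45 is arbitrary:

    import numpy as np
    from quarklib import lut

    # A gamma LUT on its own (identity base, 12-bit by default).
    gamma_only = lut.gamma(0.45)

    # Chained: invert first, then apply gamma on top of the inverted base.
    chained = lut.gamma(0.45, base=lut.invert())

    # 8-bit variant with an explicit dtype.
    small = lut.gamma(0.45, bits=8, dtype=np.uint8)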

quarklib.lut.gamma(value, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create a LUT for gamma correction.

Parameters:
  • value (float) – The gamma value.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError – If the generated LUT contains values too large to represent in a bits-bit integer.

Return type:

ndarray

quarklib.lut.intensity_stretch(min_value, max_value, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create quantization LUT.

This LUT will stretch the values between min_value and max_value to the full value range 0 to 2^bits-1.

Parameters:
  • min_value (int) – Minimum value.

  • max_value (int) – Maximum value.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError

  • If min_value is not between 0 and max_value.

  • If max_value is not between 0 and the maximum value for a bits-bit integer.

Return type:

ndarray

quarklib.lut.intensity_stretch_percentage(min_percentage, max_percentage, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create quantization LUT.

This LUT will stretch the values between min_percentage and max_percentage to the full value range 0 to 2^bits-1.

Parameters:
  • min_percentage (float) – Minimum value [%].

  • max_percentage (float) – Maximum value [%].

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError

  • If min_percentage is not between 0 and max_percentage.

  • If max_percentage is not between 0 and 100.

Return type:

ndarray

quarklib.lut.invert(*, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create inverted LUT.

Parameters:
  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Return type:

ndarray

quarklib.lut.linear(degree, y_intersection, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create linear LUT.

Parameters:
  • degree (float) – The slope in degrees.

  • y_intersection (int) – The value at which the y-axis is intersected (x=0).

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Return type:

ndarray

quarklib.lut.linear_percentage(degree, y_intersection_percentage, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create linear LUT.

Parameters:
  • degree (float) – The slope in degrees.

  • y_intersection_percentage (int) – The percentage at which the y-axis is intersected (x=0).

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Return type:

ndarray

quarklib.lut.piecewise_linear(points, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create LUT with multiple linear pieces.

This generates a LUT that is linear between the given points.

Parameters:
  • points (list[tuple[int, int]]) – A list of points to interpolate between (x, y). The points should be sorted in ascending order by x value.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError

  • If any coordinate x or y in points is not in the valid range of a bits-bit integer.

  • If the x coordinates in points are not non-decreasing.

Return type:

ndarray
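
Example – a piecewise linear LUT for the default 12-bit range; a sketch where the points are illustrative and must be sorted by x:

    from quarklib import lut

    # (x, y) points in ascending x order, spanning 0 .. 4095.
    points = [(0, 0), (1024, 512), (3072, 3583), (4095, 4095)]
    curve = lut.piecewise_linear(points)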

quarklib.lut.piecewise_linear_percentage(points, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create LUT with multiple linear pieces.

Parameters:
  • points (list[tuple[int, int]]) – A list of points to interpolate between (x, y). The values should be given as percentages of the maximum value.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError – If any coordinate x or y in points is not between 0 and 100.

Return type:

ndarray

quarklib.lut.quantization(steps, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create quantization LUT.

Parameters:
  • steps (int) – The number of steps.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError – If steps is not between 1 and the maximum value of a bits-bit integer.

Return type:

ndarray

quarklib.lut.range_value(low, high, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create range LUT.

This generates a LUT for which values below low and above high are set to 0, and all other values are set to 2^bits.

Parameters:
  • low (int) – The lower threshold.

  • high (int) – The upper threshold.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError

  • If low is not between 0 and high.

  • If high is not in the valid range of a bits-bit integer.

Return type:

ndarray

quarklib.lut.range_percentage(low, high, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create range LUT.

This generates a LUT for which values below low and above high are set to 0, and all other values are set to 2^bits.

Parameters:
  • low (float) – The lower threshold, as a percentage of the maximum value (2^bits-1).

  • high (float) – The upper threshold, as a percentage of the maximum value (2^bits-1).

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError

  • If low is not between 0 and high.

  • If high is not between 0 and 100.

Return type:

ndarray

quarklib.lut.s_curve(mid_point, force, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create S-curve LUT.

Parameters:
  • mid_point (int) – Where on the X axis the midpoint should be.

  • force (float) – The “bendiness” of the curve, as a percentage.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError

  • If mid_point is not in the valid range of a bits-bit integer.

  • If force is not between 0 and 100.

Return type:

ndarray

quarklib.lut.s_curve_percentage(mid_point, force, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create S-curve LUT.

Parameters:
  • mid_point (int) – Where on the X axis the midpoint should be, as a percentage.

  • force (float) – The “bendiness” of the curve, as a percentage.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError – If mid_point is not between 0 and 100.

Return type:

ndarray

class quarklib.lut.Threshold(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: Enum

Threshold types for a threshold LUT.

BINARY = 1
TRUNCATE = 2
TO_ZERO = 3
TO_ZERO_INVERTED = 4
quarklib.lut.threshold(value, type, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create threshold LUT.

Parameters:
  • value (int) – The threshold value.

  • type (Threshold) – The threshold type.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError – If value is not in the valid range for a bits-bit integer.

Return type:

ndarray
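
Example – threshold LUTs using the Threshold enum; a sketch where the threshold value 2000 is arbitrary and the comments only paraphrase the type names:

    import numpy as np
    from quarklib import lut

    # Binary threshold at 2000 on the default 12-bit range.
    binary = lut.threshold(2000, lut.Threshold.BINARY)

    # Truncating variant, stored as 16-bit unsigned integers.
    truncated = lut.threshold(2000, lut.Threshold.TRUNCATE, dtype=np.uint16)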

quarklib.lut.threshold_percentage(percentage, type, *, base=None, bits=12, dtype=<class 'numpy.int32'>)

Create threshold LUT.

Parameters:
  • percentage (float) – The threshold percentage.

  • type (Threshold) – The threshold type.

  • base (ndarray | None) – Base LUT, if None the identity LUT is used as base.

  • bits (int) – The number of bits that the LUT should use, e.g. 8 bits results in a LUT of size 2^8.

  • dtype (dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any]) – The NumPy dtype for the resulting LUT.

Raises:

ValueError – If percentage is not between 0 and 100.

Return type:

ndarray

Image

Image functions.

quarklib.image.save_pnm(name, frame)

Save frame as PNM file.

NOTE: For 16-bit frames PNM uses big endian, so a byte-swap may be required. This can decrease performance, but the byte-swap can be sped up by doing it in-place with frame = frame.byteswap(True).newbyteorder() before calling save_pnm().

Parameters:
  • name (str) – The file name to save to.

  • frame (ndarray) – The frame to save.

Return type:

None
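
Example – saving a 16-bit frame with the in-place byte-swap suggested in the note above; a sketch, the frame here is synthetic:

    import numpy as np
    from quarklib import image

    # Synthetic 16-bit greyscale frame holding 12-bit values.
    frame = np.random.randint(0, 2**12, size=(480, 640), dtype=np.uint16)

    # Optional in-place byte-swap, as in the note, so save_pnm does not have
    # to swap a copy itself.
    frame = frame.byteswap(True).newbyteorder()

    image.save_pnm("frame.pnm", frame)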

ISP

Qtec ISP related functions.

quarklib.isp.create_trigger_sequence(cam, exposures, *, flash_times=None, set_control=True)

Create a trigger sequence with minimal frame delay.

Parameters:
  • cam (qamlib.Camera) – The qamlib.Camera that the trigger sequence is for.

  • exposures (list[int]) – A sequential list of the exposure times for the trigger sequence.

  • flash_times (list[int] | None) – Optional flash_times for each exposure.

  • set_control (bool) – Whether to actually set the “Trigger Sequence” control for cam.

Raises:

ValueError

  • If the camera does not support setting the trigger sequence.

  • If flash_times is defined and is not the same length as exposures.

Return type:

None
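
Example – setting up a three-exposure trigger sequence; a sketch where the device path and exposure values are assumptions, and the units follow the camera's exposure control:

    import qamlib
    from quarklib import isp

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path

    # Three exposures with flash times of the same length.
    isp.create_trigger_sequence(cam, [100, 500, 2000], flash_times=[100, 100, 100])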

class quarklib.isp.AutoWhiteBalance(cam, crop=None)

Bases: ABC

Base class for automatic white balance methods.

White balance requires the pixel format to be one of the RGB(A) or BGR(A) formats.

Parameters:
  • cam (Camera) – Camera used for auto white balance

  • crop (tuple[int, int, int, int] | None) – Optional frame crop

Raises:

ValueError

  • If the pixel format is invalid.

  • If the camera does not support digital gain mapping.

property gains: ndarray[Any, dtype[float32]]

Get normalized camera gain values as an array.

Returns:

Array of normalized gain values in the range [0, 4].

abstract iterate(frame, *, fail_ratio=0.3, pixel_value_interval=(40, 220))

Perform an iteration of the white balancing algorithm.

The method returns the angular error of the white balance of the frame prior to correction. See angular_error() for details on the calculation.

Parameters:
  • frame (ndarray) – The input frame

  • fail_ratio (float) – If the total fraction of pixels removed due to either low pixel values or high saturation exceeds this value, the method fails and raises a RuntimeError.

  • pixel_value_interval (tuple[int, int]) – Only pixels with average values within this interval are included in the calculation.

Raises:

RuntimeError – An error is raised either due to over saturation (see saturation_fail_ratio) or if the target angular error is not reached within the maximum allowed number of iterations.

Returns:

The angular error of the mean pixel prior to correction.

Return type:

float

evaluate(frame)

Evaluate the current white balance.

The method calculates the angular error for the provided frame.

Parameters:

frame (ndarray) – Input image.

Returns:

The saturation error as a fraction ([0, 1]) and in degrees.

Return type:

tuple[float, float]

class quarklib.isp.GreyWorld(cam, crop=None, *, target_value=None)

Bases: AutoWhiteBalance

Estimate gain values using the Grey World assumption.

The algorithm uses a fixed color channel to determine the target luminosity for the other channels. The channel can either be set manually or be automatically determined during initialization.

Parameters:
  • cam (qamlib.Camera) – Camera used for auto white balance

  • crop (tuple[int, int, int, int] | None) – Optional frame crop

  • target_value (int | None) – Desired target value. By default, a number of sample frames are captured and the brightest mean channel value is used as the target. Note that this resets the current gain values.

iterate(frame, *, fail_ratio=0.3, pixel_value_interval=(40, 220))

Perform an iteration of the grey world white balancing algorithm.

The method updates the gains using the following equation:

gains_new = target_value * (gains_previous / mean_pixel_value),

where mean_pixel_value is the average pixel value over the frame (or crop if specified), target_value is the target value for all color channels, and gains_previous are the current gain values for each channel.

The method returns the angular error of the white balance of the frame prior to correction. See angular_error() for details on the calculation.

Parameters:
  • frame (ndarray) – The input frame

  • fail_ratio (float) – If the total fraction of pixels removed due to either low pixel values or high saturation exceeds this value, the method fails and raises a RuntimeError.

  • pixel_value_interval (tuple[int, int]) – Only pixels with average values within this interval are included in the calculation.

Raises:

RuntimeError – An error is raised either due to over saturation (see saturation_fail_ratio) or if the target angular error is not reached within the maximum allowed number of iterations.

Returns:

The angular error of the mean pixel prior to correction.

Return type:

float

quarklib.isp.one_shot_auto_white_balance(cam=None, white_balance_method=None, target_error=0.01, max_iterations=10, *, num_average=10, fail_ratio=0.3, pixel_value_interval=(40, 220))

Perform estimation.

Estimates gains iteratively using the mean pixel value of a number of averaged frames. The method uses the brightest color channel as the target value. The gains are then estimated linearly from the observed pixel values in each iteration.

Image saturation is estimated at the start to determine whether the image is suitable for determining white balance. If not, an error is raised. Each iteration averages several frames to improve stability. Over saturated pixels are removed before calculating the mean pixel value. The saturation value is calculated using the linear_saturation() function.

As soon as the method reaches the target value, it returns. See angular_error() for details on the calculation.

Parameters:
  • cam (qamlib.Camera | None) – Camera to use. If omitted, the camera from the white_balance_method is used.

  • white_balance_method (AutoWhiteBalance | None) – The white balance algorithm to use. If omitted, a GreyWorld method is created using the provided camera.

  • target_error (float) – Target saturation error [0, 1].

  • max_iterations (int) – Maximum number of iterations

  • num_average (int) – The number of frames to average in each iteration

  • fail_ratio (float) – If the total fraction of pixels with saturation above the threshold exceeds this value, the method fails and a RuntimeError is raised

  • pixel_value_interval (tuple[int, int]) – Only pixels with average values within this interval are included in the calculation.

Raises:
  • ValueError – When both cam and white_balance_method are None.

  • RuntimeError – An error is raised either due to over saturation (see saturation_fail_ratio) or if the target angular error is not reached within the maximum allowed number of iterations.

Returns:

The angular error.
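
Example – one-shot white balance; a sketch where the device path is assumed and the crop ordering follows the (left, top, width, height) convention documented for AutoExposure:

    import qamlib
    from quarklib import isp

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path, RGB/BGR format

    # Simplest form: a GreyWorld method is created internally.
    error = isp.one_shot_auto_white_balance(cam)

    # Or drive an explicit method, restricted to a region of interest.
    awb = isp.GreyWorld(cam, crop=(0, 0, 640, 480))
    error = isp.one_shot_auto_white_balance(white_balance_method=awb)
    print(f"residual angular error: {error}")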

quarklib.isp.angular_error(pixel)

Calculate the saturation of a pixel in RGB or BGR space.

The saturation is calculated as the angle between the measured pixel and the grey vector ([1., 1., 1.]). The error is returned as a value in the interval [0, 1] where 1 is the worst-case error (~54.4 degrees). The absolute value, in degrees, is also returned.

Parameters:

pixel (ndarray) – The measured pixel value (usually a mean)

Raises:

ValueError – If the pixel value is 0.

Returns:

The error fraction and the absolute error in degrees.

Return type:

tuple[float, float]

quarklib.isp.calibrate_black_level(cam, *, target_value=None, threshold=0.01)

Calibrate Black Level for a camera.

This function averages 5 frames, counts the number of pixels that are below or equal to target_value, and adjusts the black level until the fraction of such pixels exceeds what threshold specifies.

Parameters:
  • cam (qamlib.Camera) – The qamlib.Camera object to do the calibration to.

  • target_value (float | None) – The target brightness value. If not set, it is chosen according to the sensor type.

  • threshold (float) – The fraction of pixels that should be <= target_value before the calibration stops.

Raises:

ValueError

  • If using a camera with an unsupported sensor type. Currently, IMX, GMAX, and CMV are supported.

  • If the camera does not support a greyscale format.

Return type:

None

class quarklib.isp.AutoExposure(cam, *, crop, num_average)

Bases: object

Auto exposure base class.

This is the base class for auto exposure classes. The classes only support certain known formats: BGR3, BGR4, HSV, RGB3, RGB4, GREY, GREY16.

Parameters:
  • cam (qamlib.Camera) – The qamlib.Camera object to do the auto exposure on.

  • crop (tuple[int, int, int, int] | None) – The crop of the frame to focus the auto exposure algorithm on.

  • num_average (int) – The number of frames to average over before calculating a new exposure.

update_settings()

Get updated camera settings used for auto exposure.

This includes settings such as the maximum exposure and the current format. This function needs to be called if the framerate or format has been changed after creating the object.

Return type:

None

iterate(meta, frame)

Do one iteration of auto exposure.

This will only change exposure every num_average frames.

NOTE: This function is only valid for sub-classes.

Parameters:
  • meta (qamlib.FrameMetadata)

  • frame (ndarray[Any, dtype[uint8]])

Return type:

float | None

quarklib.isp.one_shot_auto_exposure(cam, error_percentage=0.3, max_iterations=200, auto_exposure=None)

One time auto exposure.

This function will do a round of auto exposure and return when the targeted precision is reached, when max_iterations is reached, or when the exposure value reaches its minimum or maximum.

Parameters:
  • cam (qamlib.Camera) – The qamlib.Camera to do auto exposure on.

  • error_percentage (float) – The change in exposure, as a percentage, below which the auto exposure is considered finished.

  • max_iterations (int) – The maximum number of frames to use for auto exposure; the auto exposure stops when this limit is reached.

  • auto_exposure (SimpleAutoExposure | None) – The AutoExposure object to use; if None, it defaults to SimpleAutoExposure.

Raises:

ValueError – If error_percentage is not between 0 and 100.

Return type:

None
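
Example – one-shot auto exposure; a sketch where the device path and tuning values are assumptions:

    import qamlib
    from quarklib import isp

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path

    # Run with the default SimpleAutoExposure.
    isp.one_shot_auto_exposure(cam)

    # Or tune the underlying algorithm first.
    ae = isp.SimpleAutoExposure(cam, crop=(0, 0, 640, 480), num_average=5)
    isp.one_shot_auto_exposure(cam, error_percentage=0.5, auto_exposure=ae)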

class quarklib.isp.SimpleAutoExposure(cam, *, crop=None, num_average=3, saturation_target_ppm=100, target_value=None, down_scale=4, max_allowed_exposure_time=None)

Bases: AutoExposure

Simple auto exposure.

This auto exposure class will try to saturate the image up to target_value. When increasing the exposure it uses the maximum pixel value in the frame to calculate how much to increase exposure.

Parameters:
  • cam (qamlib.Camera) – qamlib.Camera object to do auto exposure on.

  • crop (tuple[int, int, int, int] | None) – The part of the frame to look at for auto exposure (left, top, width, height)

  • num_average (int) – Number of frames to average before calculating new exposure

  • saturation_target_ppm (float) – How many pixels should be “saturated” (above target_value), in parts per million (1 pixel per megapixel).

  • target_value (int | None) – Target value. If None, the maximum value of the frame datatype minus 1 is used.

  • down_scale (int) – The factor by which to downscale the frame in both height and width; this is done to speed up the calculations.

  • max_allowed_exposure_time (int | None) – The maximum allowed exposure time. Overrules the exposure time set by the framerate, as long as it is <= the maximum exposure time for the given framerate.

iterate(meta, frame)

Do one iteration of auto exposure.

This function uses the metadata to make sure that the frame is not older than the last time the exposure was changed.

Parameters:
  • meta (qamlib.FrameMetadata) – The frame metadata.

  • frame (ndarray[Any, dtype[uint8]]) – The frame to look at.

Raises:
  • ValueError – If the pixel format is not supported (currently raw Bayer formats are not supported).

  • ValueError – If the calculated exposure is above or below the boundary values.

Returns:

Calculated exposure ratio

Return type:

float | None
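
Example – continuous auto exposure in a capture loop; a sketch that assumes qamlib's streaming interface, where entering the camera context starts streaming and get_frame() returns a (metadata, frame) pair:

    import qamlib
    from quarklib import isp

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path
    ae = isp.SimpleAutoExposure(cam, num_average=3)

    with cam:  # assumed to start/stop streaming
        for _ in range(100):
            meta, frame = cam.get_frame()
            # Exposure is only adjusted once every num_average frames.
            ratio = ae.iterate(meta, frame)
            if ratio is not None:
                print(f"exposure ratio: {ratio}")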

Color Calibration Matrix

Color Conversion Module.

This module provides an API for performing color channel transformations as supported by the AFE CCM module.

The module provides functions for creating color correction transformations using the ColorCheckerClassic color reference.

class quarklib.ccm.Solver(*args, **kwargs)

Bases: Protocol

Solver interface.

Produces a 4x5 color transformation matrix (for use in the CCM module) using two color references.

quarklib.ccm.linear_color_correction_rgb(source, target, offset=False)

Least squares optimization.

The method finds the minimal distance achievable between transformed points and target points. Since the algorithm operates in RGB, it is unlikely that the result is also the optimal perceived color distance. The algorithm is therefore best suited to machine vision tasks where the RGB color space is used.

Parameters:
  • source (ColorReference) – The source color reference.

  • target (ColorReference) – The target color reference.

  • offset (bool) – Whether to use the offset component of the CCM.

Returns:

The optimized color conversion matrix.

Return type:

ndarray

class quarklib.ccm.ColorChecker(frame, margin=0.024, min_area=2000, box_shape=(6, 4))

Bases: object

Detector for the ColorChecker reference board.

Parameters:
  • frame (ndarray) – Image to find the board in.

  • margin (float) – The proportional margin between each color box. The default value should generally be appropriate.

  • min_area (int) – Minimum box area in pixels. This might have to be modified if the board is placed far from the sensor.

  • box_shape (tuple[int, int]) – The shape (dimensions) of the board grid. The default value is valid for the ColorCheckerClassic board.

reference(frame)

Measure the detected board colors and return the reference.

Parameters:

frame (ndarray) – Image to measure the colors from. This may be the same frame as used in the constructor but does not have to be, as long as the board or camera has not been moved.

Returns:

The measured reference.

Return type:

ColorReference

visualize(frame, reference)

Visualize the detected board and reference.

This method is useful to visualize the perceived quality of a color-corrected image when compared to the ground truth reference.

Parameters:
  • frame (ndarray) – Image to draw the visualization onto.

  • reference (ColorReference) – Color reference painted onto the image to visualize the color difference.

class quarklib.ccm.ColorReference(board, size, source, colors)

Bases: object

Represents a set of reference colors identified by index.

Parameters:
  • board (str) – An identifier for the physical source reference.

  • size (tuple[int, int]) – Board grid size as (width, height).

  • source (str) – Reference source location, e.g. the official reference or a combination of image sensor and capture parameters.

  • colors (list[ColorDefinition]) – The list of references. Each element describes a particular color.

board: str
size: tuple[int, int]
source: str
colors: list[ColorDefinition]
classmethod from_dict(value)

Create reference from dictionary.

The dictionary must contain, as keys, the fields specified in the class constructor.

Parameters:

value (dict) – Dictionary containing as keys the fields specified in the class constructor.

Returns:

The reference object.

Return type:

ColorReference

classmethod from_file(path)

Create reference directly from a file.

Parameters:

path (str) – Path to a valid reference file (generated using ColorCheckerReference.save()).

Returns:

The reference object.

Return type:

ColorReference

save(path)

Save reference to the specified path.

Parameters:

path (str) – JSON file path (must end in the .json extension)

rgb()

Fetch reference RGB colors as an Nx3 array where N is the number of colors.

Returns:

The array of colors.

Return type:

ndarray[Any, dtype[float64]]

lab()

Fetch reference LAB colors as an Nx3 array where N is the number of colors.

Raises:

ValueError – If the color reference does not contain a LAB value.

Returns:

The array of colors.

Return type:

ndarray

to_dict()

Convert the object into a dictionary suitable for json serialization.

Returns:

The dictionary.

mse(other)

Mean squared error of each color channel.

Parameters:

other (ColorReference) – The comparison reference.

Returns:

A 1x3 array containing the MSE of each channel.

Return type:

float

se(other)

Squared error of each color patch.

Parameters:

other (ColorReference) – The comparison reference.

Returns:

An Nx3 array containing the squared errors of each color.

Return type:

ndarray[Any, dtype[float64]]

distance_error(other)

Distance error of each color patch.

Parameters:

other (ColorReference) – The comparison reference.

Returns:

An Nx1 array of the Euclidean distance errors of each color.

Return type:

ndarray[Any, dtype[float64]]

angular_error(other)

Angular error of each color patch.

The angular error is calculated between each color in this reference and the corresponding color in other.

Parameters:

other (ColorReference) – The comparison reference.

Returns:

The angular errors of each color patch.

quarklib.ccm.perform_color_correction(cam, target, solver=<function linear_color_correction_rgb>)

Calculate and set the color conversion transformation of a camera.

Parameters:
  • cam (Camera) – Target camera.

  • target (ColorReference) – Target color reference.

  • solver (Solver) – Solver to use for finding the transformation matrix.

Raises:

ValueError – If the pixel format is not RGB.
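
Example – color correction against a stored reference; a sketch where the device and reference file paths are assumptions, and it assumes perform_color_correction captures frames of the board itself:

    import qamlib
    from quarklib import ccm

    cam = qamlib.Camera("/dev/qtec/video0")  # assumed device path, RGB format

    # Ground-truth colors for the board, e.g. a previously saved reference.
    target = ccm.ColorReference.from_file("colorchecker_reference.json")

    # Solve for the transformation and apply it to the camera.
    ccm.perform_color_correction(cam, target, solver=ccm.linear_color_correction_rgb)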

Camera

Camera module.

This module provides a wrapper for qamlib.Camera which is augmented with a high-level control interface for the underlying camera controls.

class quarklib.camera.PixelFormat(fourcc, channels, bits)

Bases: object

PixelFormat helper class.

Parameters:
  • fourcc (str)

  • channels (list[str])

  • bits (int)

fourcc: str
channels: list[str]
bits: int
classmethod from_qamlib(format)

Create from a qamlib pixel format.

Parameters:

format (qamlib.PixelFormat)

Return type:

Self

classmethod from_fourcc(fourcc)

Parse from V4L2 fourcc code.

The currently supported values are: GREY, Y16, QG08, QG16, RGB3, RGB4, BGR3, BGR4, GBRG, GRBG, RGGB, BA81.

Parameters:

fourcc (str) – V4L2 fourcc code.

Raises:

ValueError – When an invalid fourcc code is provided.

Returns:

The pixel format

Return type:

Self

property n_channels

Get the number of channels.

Returns:

The number of channels.

get_permutation_to(other)

Calculate the channel permutation between RGB/BGR formats.

The function calculates the necessary permutation for the specified conversion.

Parameters:

other (Self) – The target pixel format.

Raises:

ValueError – If self or other has an invalid pixel format.

Returns:

List of channel indices describing where the channel in the input format is placed in the output format.

class quarklib.camera.Camera(*args, **kwargs)

Bases: Camera

High-level wrapper for qamlib.Camera.

This wrapper provides the same basic interface as qamlib.Camera but adds a high-level interface to the camera’s controls. The controls are predefined by default but can be customized. The default control set is specified in quarklib.camera.controls.definitions.qtec_default_controls_builder.

Parameters:
  • path (str | int | None) – File path or device number for qamlib.Camera. If empty, the default device is used.

  • control_builder (GroupControlDefinition) – The builder to use for creating the camera controls. By default, a predefined builder is used.

property controls: GroupControl

High-level controls access point.
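
Example – opening the high-level wrapper; a sketch that assumes the default device is used when no path is given:

    from quarklib.camera import Camera

    # Open the default device with the predefined Qtec control set.
    cam = Camera()

    # High-level controls are reached through the controls property.
    ctrls = cam.controls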