Custom operations

The Hypervision SDK provides facilities for running custom code inside its pipeline system. This means that you can take advantage of the same lazy calculations and efficient streaming that the built-in operations use.
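The lazy behaviour can be illustrated with a small, SDK-independent sketch: each step records a function instead of executing it, and nothing runs until the result is requested. All names here (`LazyPipeline`, `then`, `resolve`) are hypothetical and not part of the SDK.

```python
import numpy as np

class LazyPipeline:
    """Toy stand-in for a lazy image pipeline (not SDK code)."""
    def __init__(self, data, ops=()):
        self._data = data
        self._ops = list(ops)

    def then(self, fn):
        # Recording the operation is cheap; nothing is computed yet.
        return LazyPipeline(self._data, self._ops + [fn])

    def resolve(self):
        # Only now are the recorded operations actually applied.
        out = self._data
        for fn in self._ops:
            out = fn(out)
        return out

pipe = LazyPipeline(np.ones((4, 3))).then(lambda a: a * 2).then(np.sum)
print(pipe.resolve())  # 24.0
```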

To ease the use of these custom operations, the SDK provides a number of helper functions, including hsi.util.operation() and hsi.util.predictor().

Important

The helper functions are currently only available in the Python extension. They may be added to the C library later if there is a need for high-level integrations.

A custom operation can be created by using the hsi.HSImage.ufunc() method like so:

Warning

The C API is very verbose due to closures not being natively supported. The API will likely change in the future to something more suited to use from C.

import numpy as np

import hsi

def my_function(plane: np.ndarray) -> np.ndarray:
    return plane.mean(axis=1, keepdims=True)

# Apply `my_function` to each plane of the image
# (`img` is an already-opened hsi.HSImage).
res = img.ufunc(my_function)

# The resulting image uses the same lazy pipeline as the built-in operations.
res.array_plane(200, hsi.bands)

# When resolving the entire image, the function is applied on a plane-by-plane basis.
out = res.to_numpy()
The corresponding setup through the C API looks like this:

#include <assert.h>
#include <stdio.h>

#include "hv-sdk.h"

typedef struct {
    int ref_count;
    int count;

    hv_array_t *array;
} callback_env;

hv_array_t *callback(callback_env *env_ptr, hv_array_t *plane) {
    printf("Plane no: %d\n", env_ptr->count++);
    return plane;
}

void release_fn(callback_env *env_ptr) {
    printf("release %d\n", env_ptr->ref_count);
    env_ptr->ref_count--;
    if (env_ptr->ref_count == 0) {
        hv_array_free(&env_ptr->array);
    }
}

void retain_fn(callback_env *env_ptr) {
    env_ptr->ref_count++;
    printf("retain %d\n", env_ptr->ref_count);
}

int main(void) {
    // The file format (PAM here) is inferred from the extension.
    const char *path = "resources/docs/ex1.pam";
    hv_hsi_file_t *file;
    assert(!hv_hsi_file_open(path, &file));

    // Convert the file to an image (transfers ownership - file is NULL after this)
    hv_hs_image_t *img = hv_hsi_file_to_image(&file);

    hv_shape_meta_t *shape_meta;
    assert(!hv_hs_image_shape(img, &shape_meta));

    slice_ref_size_t sh = {
        .len = 3,
        .ptr = (size_t[3]){
            *hv_shape_meta_lines(shape_meta),
            *hv_shape_meta_samples(shape_meta),
            *hv_shape_meta_bands(shape_meta)
        }
    };

    callback_env env = {
        .ref_count = 0,
        .count = 0,
        .array = hv_array_zeros(HV_DTYPE_U8, sh)
    };
    ArcDynFn1_hv_array_ptr_hv_array_ptr_t closure = {
        .env_ptr = &env,
        .retain = (void (*)(void *)) &retain_fn,
        .release = (void (*)(void *)) &release_fn,
        .call = (hv_array_t* (*)(void *, hv_array_t *)) &callback
    };

    hv_hs_image_t *res = hv_hs_image_op(img, closure);

    hv_array_t *plane;
    assert(!hv_hs_image_array_plane(res, 10, HV_AXIS_BANDS, &plane));

    hv_hs_image_free(&res);
    hv_array_free(&plane);

    return 0;
}

Note that ufunc() does not account for different interleaves and always operates on the inner two dimensions of the image.
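Because ufunc() iterates over the outermost axis, the shape a callback sees depends on the image's interleave. The following NumPy-only sketch shows the plane-by-plane behaviour; the cube and its dimensions are illustrative assumptions, not SDK code.

```python
import numpy as np

def my_function(plane: np.ndarray) -> np.ndarray:
    return plane.mean(axis=1, keepdims=True)

# Simulated BIP cube: (lines, samples, bands).
cube = np.arange(2 * 4 * 3, dtype=float).reshape(2, 4, 3)

# Apply the callback to each (samples, bands) plane, mirroring how a
# per-plane operation walks the outermost axis; a differently interleaved
# cube would expose differently shaped planes to the same callback.
out = np.stack([my_function(cube[i]) for i in range(cube.shape[0])])
print(out.shape)  # (2, 4, 1)
```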

General operations

The SDK provides a decorator, hsi.util.operation(), that ensures a desired interleave and wraps a function so it can be used directly with an hsi.HSImage:

import numpy as np

import hsi
from hsi.util import operation

@operation(hsi.bip)
def my_function(plane: np.ndarray) -> np.ndarray:
    return plane.mean(axis=1, keepdims=True)

# The call now ensures that the interleave is consistent.
res = my_function(img)
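Conceptually, such a decorator needs little more than an axis-ordering step and a per-plane loop. The following NumPy-only sketch illustrates that idea; `operation_sketch` and its transpose logic are assumptions for illustration, not the SDK's actual implementation.

```python
import functools
import numpy as np

def operation_sketch(axes_order):
    """Toy decorator: reorder a cube's axes, then map a function over planes."""
    def wrap(fn):
        @functools.wraps(fn)
        def apply(cube: np.ndarray) -> np.ndarray:
            # Ensure a consistent axis order before iterating over planes.
            ordered = np.transpose(cube, axes_order)
            return np.stack([fn(plane) for plane in ordered])
        return apply
    return wrap

@operation_sketch((0, 1, 2))  # identity order, standing in for hsi.bip
def band_mean(plane: np.ndarray) -> np.ndarray:
    return plane.mean(axis=1, keepdims=True)

print(band_mean(np.ones((2, 4, 3))).shape)  # (2, 4, 1)
```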

Machine learning models

The SDK also provides a helper function, hsi.util.predictor(), for adapting scikit-learn-style ML models to hsi.HSImage instances. It supports models that use individual spatial pixels as input.

import hsi
from hsi.util import predictor
from sklearn.linear_model import LinearRegression

img = hsi.open(<path>)

model = LinearRegression()
model.fit(...) # Training step

hsimage_predictor = predictor(model)

res = hsimage_predictor(img)

# Again, the operation is lazy and is only applied when necessary.
out = res.to_numpy()

# This means that operations like slicing will reduce the number of computations.
out_small = res[50:60, :, :].to_numpy()
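The adaptation that a pixel-wise predictor performs can be sketched without the SDK: flatten the spatial axes into a (pixels, bands) matrix, call the model's predict(), and restore the spatial shape. The cube shape and `ToyModel` below are assumptions for illustration.

```python
import numpy as np

class ToyModel:
    """Stand-in for a fitted scikit-learn estimator with a predict() method."""
    def predict(self, X: np.ndarray) -> np.ndarray:
        return X.mean(axis=1)  # one value per pixel

cube = np.arange(2 * 4 * 3, dtype=float).reshape(2, 4, 3)  # (lines, samples, bands)

flat = cube.reshape(-1, cube.shape[-1])  # (pixels, bands), one row per pixel
pred = ToyModel().predict(flat)          # (pixels,)
out = pred.reshape(cube.shape[0], cube.shape[1], 1)  # restore spatial shape
print(out.shape)  # (2, 4, 1)
```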