HV SDK
A Rust-based library of HSI-related functions, with bindings for other programming languages (Python and C bindings are currently available).
The library makes it easy to work with different HSI sources, as it supports input and output of multiple datacube file formats (currently PAM, ENVI and TIFF) as well as live camera data. This makes it straightforward to move from a proof-of-concept phase to a final solution, and to debug workflows using saved data.
The HV SDK is used as the backend for the HSI-related functionality of the HV Explorer. This makes it easy to take a workflow from the HV Explorer and implement it as a standalone program using the HV SDK library.
The HV SDK library provides a generic interface for working with HSI files and live camera data. Using lazy operations and streaming, it optimizes memory and CPU usage for the operations you define.
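As a rough illustration of this lazy model (a minimal sketch, using only calls that appear in the tutorial sections below):
import hsi

# Open lazily, build up the operations, and only resolve when output is needed
img = hsi.open("path/to/file.hdr")
roi = img[:100, :, :]              # slicing stays lazy
array = roi.to_numpy()             # evaluation/streaming happens here
hsi.write(roi, "path/to/roi.pam")  # or here, when writing to disk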
The HV SDK is still under development: new features, improvements and bug fixes are added continuously.
The alpha version was released in March 2025, and live data capture support was added in September 2025.
Requirements
- Python >= 3.10
- RAM: 8 GB (minimum recommended)
Installation
- Python package:
pip install qtec-hv-sdk --index-url https://gitlab.com/api/v4/projects/67863505/packages/pypi/simple
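After installation, the package is imported as hsi (the module name used throughout this tutorial); a quick sanity check could be:
import hsi
print(hsi)  # importing without an ImportError confirms the installation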
Tutorial
See Recipes in the official docs for more examples, and the Examples section for complete programs using the HV SDK.
Reading and writing datacubes
import hsi
img = hsi.open("path/to/file.hdr") # Expects ENVI file due to the extension
img.write("path/to/output.pam") # Writes PAM format
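As the comments indicate, the file format is inferred from the extension. For in-memory inspection, the cube can also be resolved to a NumPy array with the same to_numpy() call used in the later examples:
import hsi

img = hsi.open("path/to/file.hdr")
array = img.to_numpy()  # resolves the lazy datacube into a NumPy array
print(array.shape, array.dtype)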
Changing the interleave type
See Exporting datacubes in the HV Explorer tutorial for why using the correct interleave matters when writing datacubes.
import hsi
a = hsi.open("path/to/file.hdr").to_numpy_with_interleave(hsi.bil)
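Judging from their use in this document, to_numpy_with_interleave() eagerly returns a NumPy array in the requested layout, while to_interleave() (used in the PCA example below) keeps the result as a lazy hsi image that can be processed or written further. A small sketch combining the two calls as they appear elsewhere in this document:
import hsi

# Eager: a NumPy array laid out as BIL
arr = hsi.open("path/to/file.hdr").to_numpy_with_interleave(hsi.bil)

# Lazy: stays an hsi image that can be written or processed further
img = hsi.open("path/to/file.hdr").to_interleave(hsi.Interleave.BIL)
img.write("path/to/output_bil.pam")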
Reflectance calibration
import hsi
from hsi.preprocessing import make_reference, reflectance_calibration
img = hsi.open("path/to/file.hdr")
dark = hsi.open("path/to/dark_file.hdr")
# Inline white reference: built from the first 100 lines of the capture
white_ref = make_reference(img[:100, :, :])
dark_ref = make_reference(dark)
reflectance = reflectance_calibration(img, white_ref, dark_ref)
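For reference, reflectance calibration conventionally computes (raw - dark) / (white - dark). The sketch below writes that formula in plain NumPy; it is illustrative only, and the SDK's reflectance_calibration may differ in details such as clipping, broadcasting of the references or dtype handling.
import numpy as np

def reflectance_numpy(raw, white, dark):
    """Conventional reflectance formula (illustrative, not the SDK internals)."""
    return (raw - dark) / (white - dark)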
Camera interface
import hsi
from datetime import datetime

N_IMGS = 10
# Timestamped output file (stdlib datetime in place of an undefined helper)
pam_filename = f"/tmp/_HSI_{datetime.now().strftime('%Y%m%d_%H%M%S')}.pam"
SAVE_CUBE = True
# Desired settings
EXP = 1000
FPS = 100
# Horizontal crop
H_START = 200
H_END = 300
# Bands
V_START = 0
V_END = 920
# ETH_B interface and default video device
cam = hsi.HSCamera("10.100.10.100", "/dev/qtec/video0")
# Get information
print(f"{cam.get_config()=}")
print(f"{cam.get_settings()=}")
print(f"{cam.get_crop()=}")
print(f"{cam.get_exposure()=}")
print(f"{cam.get_framerate()=} {cam.get_framerate_list()=}")
#print(f"Binning: {cam.get_horizontal_binning()}x{cam.get_vertical_binning()}")
print(f"{cam.get_bands()=}")
#print(f"{cam.get_wavelengths()=}")
# Set parameters
print(f"{cam.set_exposure(EXP)=}")
print(f"{cam.set_framerate(FPS)=}")
#cam.set_horizontal_binning(1)
#cam.set_vertical_binning(1)
print(f"{cam.set_horizontal_crop((H_START, H_END))=}")
print(f"{cam.set_bands([(V_START, V_END)])=}")
# Multiple band intervals (up to 8 regions):
#print(f"{cam.set_bands([(V_START1, V_END1), (V_START2, V_END2)])=}")
######### Datacube Capture
# Create a stream object
img = cam.to_hs_image()
# Configure the datacube size (N_IMGS)
datacube = img[:N_IMGS, :, :]
# Writing to file or converting to NumPy triggers the streaming start
if SAVE_CUBE:
    hsi.write(datacube, pam_filename)
else:
    array = datacube.to_numpy()
# Also triggers the streaming start
#datacube.resolve()
######### Datacube Processing
...
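Stripped to the essentials, a capture with default settings (re-using only the calls from the example above) can look like this:
import hsi

cam = hsi.HSCamera("10.100.10.100", "/dev/qtec/video0")
cube = cam.to_hs_image()[:10, :, :].to_numpy()  # capture 10 lines with default settings
print(cube.shape)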
See also the more complete example under the Quick Start section.
PCA
import hsi
from hsi.ml import pca_helper
import numpy as np
from sklearn.decomposition import PCA
img = hsi.open("path/to/file").as_dtype(hsi.float32)
def gen_select(img, n_samples_per_line=10):
    """Sample randomly (with the same number of samples per line) from the image."""
    # Assumes BIL interleave (does not work for BSQ/BIP)
    def select(plane):
        sample = np.random.choice(np.arange(plane.shape[1]), size=n_samples_per_line)
        sample = plane[:, sample]
        return sample
    return img.ufunc(select)  # Any Python function can be passed here
# Convert interleave type
img = img.to_interleave(hsi.Interleave.BIL)
# Get subsample from image in memory-efficient manner
s_out = gen_select(img).to_numpy()
s_out = s_out.transpose((0, 2, 1))
s_out = s_out.reshape((-1, s_out.shape[2]))
# Fit the PCA model (n_components is chosen here as an example value)
n_components = 3
model = PCA(n_components)
model.fit(s_out)
hs_model = pca_helper(model)
# Applying the wrapped model builds the (lazy) prediction operation
out = hs_model(img)
# As with other operations, the calculation only runs when the result is requested
result = out.to_numpy()
See also the more complete example under the Examples section.
Support
Report bugs by sending an email to: hv-sdk-support