
Examples

Example scripts related to the Hypervision HSI cameras.

They showcase, among other things: loading hyperspectral data files, capturing datacubes from Hypervision HSI cameras, and reflectance calibration.

HV SDK

The HV SDK is our high-level interface to HSI-related functionality, with both Python and C/C++ APIs.

You can find a number of usage examples (as well as the API reference) in the HV SDK documentation.
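As a minimal sketch of the Python API (it uses only calls that also appear in the PCA example below; the file path is a placeholder):

import hsi

# Open a datacube and inspect its layout (the path is a placeholder)
img = hsi.open("HSI_data/nuts/mix1_1296x1000x900_imageCube.pam")
print(img.header.interleave)  # e.g. hsi.Interleave.BSQ

# Materialize the cube as a NumPy array for further processing
cube = img.to_numpy()
print(cube.shape)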

Under development

The HSI Tools library is still under development: new features and improvements/bug fixes are constantly being added.

The alpha version was released in March 2025, and live data capture support was added in September 2025.

Currently, both Python and C bindings are available.

Data capture

Refer to the HV SDK section and the quick start guide for complete examples.

PCA

import hsi
from hsi.preprocessing import make_reference, reflectance_calibration
import numpy as np
from sklearn.decomposition import PCA

# For showing the results
import matplotlib.pyplot as plt

# Input data
img = hsi.open("HSI_data/nuts/mix1_1296x1000x900_imageCube.pam")
dark = hsi.open("HSI_data/nuts/dark_ref_1296x100x900_imageCube.pam")
white = hsi.open("HSI_data/nuts/white_ref_1296x100x900_imageCube.pam")

CHANNEL = 450

if img.header.interleave == hsi.Interleave.BSQ:
    i = img.to_numpy()[CHANNEL, :, :]
elif img.header.interleave == hsi.Interleave.BIL:
    i = img.to_numpy()[:, CHANNEL, :]
else:  # BIP
    i = img.to_numpy()[:, :, CHANNEL]
plt.figure(figsize=(10, 8))
plt.imshow(i, cmap='gray')
plt.title(f"Visualization of Image at band {CHANNEL}")
plt.xlabel("Width")
plt.ylabel("Height")
plt.colorbar(label='Pixel Intensity')

# Reflectance Calibration
dark_ref = make_reference(dark)
white_ref = make_reference(white)
reflectance = reflectance_calibration(img, white_ref, dark_ref, clip=True)

if reflectance.header.interleave == hsi.Interleave.BSQ:
    r = reflectance.to_numpy()[CHANNEL, :, :]
elif reflectance.header.interleave == hsi.Interleave.BIL:
    r = reflectance.to_numpy()[:, CHANNEL, :]
else:  # BIP
    r = reflectance.to_numpy()[:, :, CHANNEL]
plt.figure(figsize=(10, 8))
plt.imshow(r, cmap='gray')
plt.title(f"Visualization of Reflectance at band {CHANNEL}")
plt.xlabel("Width")
plt.ylabel("Height")
plt.colorbar(label='Reflectance')
plt.show()

################### PCA

def gen_select(img, n_samples_per_line=10):
    """Sample randomly (with the same number of samples per line) from the image."""

    # Assumes BIL interleave (doesn't work for BSQ/BIP)
    def select(plane):
        sample = np.random.choice(np.arange(plane.shape[1]), size=n_samples_per_line)
        sample = plane[:, sample]
        return sample

    return img.ufunc(select)  # Any Python function can be passed here

# Convert interleave type to BIL
reflectance = reflectance.to_interleave(hsi.Interleave.BIL)

# Get subsample from image in memory-efficient manner
s_out = gen_select(reflectance).to_numpy()
s_out = s_out.transpose((0, 2, 1))
s_out = s_out.reshape((-1, s_out.shape[2]))

# Fit model
n_components = 6
model = PCA(n_components)
model.fit(s_out)
loadings = model.components_
num_features = loadings.shape[1]

# --- Plotting the Loadings Grid ---

# Create a 3x2 grid of subplots
fig, axes = plt.subplots(3, 2, figsize=(15, 15))

# Use ax.flatten() to easily iterate through the subplots
axes = axes.flatten()

# Iterate through each of the 6 components and plot the loadings
for i in range(n_components):
    # Select the i-th row of the components_ matrix
    component_loadings = model.components_[i, :]

    # Plot the loadings on the current subplot
    axes[i].plot(range(num_features), component_loadings, color='blue', alpha=0.8)
    axes[i].set_title(f"Loadings for PC{i + 1}")
    axes[i].set_xlabel("Original Feature Index")
    axes[i].set_ylabel("Loading Value")
    axes[i].grid(True, linestyle='--', alpha=0.6)

# Adjust the layout to prevent titles from overlapping
plt.tight_layout()

# Show the final plot
plt.show()

# Create the transformation (pca_helper wraps the fitted sklearn PCA so it
# can be applied directly to an hsi image; it is equivalent to the dot
# product along the band axis shown in the alternative below)
hs_model = pca_helper(model)
out = hs_model(reflectance)
res = out.to_numpy()

# Alternatively, if you don't want to use the pca_helper
#out = reflectance.dot(model.components_.T, hsi.bands)
#res = out.to_numpy()

pc_images = [res[:, i, :] for i in range(res.shape[1])]

# Create a 3x2 grid of subplots with a larger figure size for better viewing
fig, axes = plt.subplots(3, 2, figsize=(10, 15))

# Use ax.flatten() to iterate through the axes easily
axes = axes.flatten()

# Iterate through the 6 images and plot them
for i, ax in enumerate(axes):
    im = ax.imshow(pc_images[i], cmap='gray')
    ax.set_title(f"Principal Component {i + 1}")
    ax.set_axis_off()  # Hide the axes for a cleaner look

# Add a single color bar for the entire figure
# Use the last plotted image (im) to get the color map information
fig.subplots_adjust(right=0.85) # Make space for the color bar
cbar_ax = fig.add_axes([0.88, 0.15, 0.04, 0.7])
fig.colorbar(im, cax=cbar_ax, label='Component Value')

# Adjust layout and show the plot
plt.tight_layout(rect=[0, 0, 0.85, 1])
plt.show()

# Plot the first two principal components
plt.figure(figsize=(10, 8))
plt.scatter(pc_images[0], pc_images[1], alpha=0.5, s=1)
plt.title("PCA Scatter Plot of First Two Components")
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.grid(True)
plt.show()

Using ROIs from HV-Explorer

HV-Explorer can save and load annotation data (regions of interest, ROIs). To save annotation data, click the Save annotations icon (floppy disk) on the right side of the top bar of the Spectra Settings tab.

Note that this example expects a "Label Property" called "type" to be present alongside the ROIs; it is used to group them into classes. To create a "Label Property", click the Add property icon on the left side of the top bar of the Spectra Settings tab.

These saved annotations can then be loaded into a Python script, used to mark the PCA regions they belong to, and subsequently used for classification, as shown below.
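The example below embeds the exported annotation data directly in the script (note the TODO); in practice the saved file can be loaded with Python's standard json module. A minimal sketch, assuming the annotations were saved as annotations.json (a placeholder name) with the structure shown further down:

import json

# Placeholder path; point this at the file saved from HV-Explorer
with open("annotations.json") as f:
    annotations_data = json.load(f)

# Each ROI carries its class in the "type" label property
for annot in annotations_data["annotations"]:
    print(annot["title"], "->", annot["properties"]["type"])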

from skimage.draw import polygon

...

# Create transformation
hs_model = pca_helper(model)
out = hs_model(reflectance)
res = out.to_numpy()

height, n, width = res.shape

pc_images = [res[:, i, :] for i in range(res.shape[1])]

# Annotation data from HV-Explorer,
# made for the nuts image: "mix1_1296x1000x900_imageCube.pam"
# TODO: load from file (JSON data), as sketched above
annotations_data = {
    "annotations": [
        {"title":"almond1","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"polyline","points":[[447,333],[460,322],[486,324],[505,337],[513,355],[495,361],[475,354],[461,346]],"width":66,"height":39,"x":447,"y":322},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#ff7f0e","properties":{"type":"almond"}},
        {"title":"almond2","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"polyline","points":[[395,291],[384,304],[400,317],[433,309],[458,286],[444,277],[415,285]],"width":73,"height":41,"x":384,"y":277},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#2ca02c","properties":{"type":"almond"}},
        {"title":"almond3","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"ellipse","cx":521,"cy":301,"a":84.0824759720031,"b":34.99025710387992,"angle":8.030087480156281,"max_dist":84},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#d62728","properties":{"type":"almond"}},
        {"title":"background","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"rectangle","x":415,"y":397,"width":138,"height":51},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#9467bd","properties":{"type":"background"}},
        {"title":"walnut1","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"polyline","points":[[345,164],[324,143],[320,110],[345,101],[347,109],[366,110],[369,97],[398,98],[411,118],[412,145],[387,157],[374,145],[370,157],[351,147]],"width":92,"height":67,"x":320,"y":97},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#8c564b","properties":{"type":"walnut"}},
        {"title":"walnut2","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"polyline","points":[[213,181],[194,178],[185,162],[191,144],[210,133],[227,132],[222,120],[226,108],[254,114],[258,127],[268,137],[267,152],[261,159],[264,165],[252,178],[241,172],[236,158],[212,163]],"width":83,"height":73,"x":185,"y":108},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#e377c2","properties":{"type":"walnut"}},
        {"title":"hasselnut1","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"polyline","points":[[484,548],[472,545],[464,528],[465,519],[482,511],[499,511],[508,517],[513,531],[500,544]],"width":49,"height":37,"x":464,"y":511},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#7f7f7f","properties":{"type":"hasselnut"}},
        {"title":"hasselnut2","uuid":"8bd8c222-f29f-54f5-88b6-5c78a59f0f2c","descriptor":{"annot":"polyline","points":[[524,611],[537,621],[560,625],[579,614],[578,597],[560,586],[538,588],[525,594]],"width":54,"height":39,"x":524,"y":586},"image_path":"mix1_1296x1000x900_imageCube.pam","color":"#bcbd22","properties":{"type":"hasselnut"}}
    ],
    "property_desc": {"type":{"descriptor_type":"labeldescriptor","labels":["almond","hasselnut","walnut","background"],"colors":{"almond":"#8ff0a4","hasselnut":"#f9f06b","walnut":"#f66151","background":"#1a5fb4"}}}
}

principal_components = np.reshape(res.swapaxes(1, 2), (-1, n))

# --- Group Annotations and Get PCA Values ---
# Use a dictionary to store lists of PCA values for each group
grouped_pc_values = {}

for annot in annotations_data["annotations"]:
    # The class name is directly in the 'type' property
    class_name = annot["properties"]["type"]
    desc = annot["descriptor"]
    mask = np.zeros((height, width), dtype=bool)

    # Convert the different annotation types to a boolean mask
    if desc["annot"] == "rectangle":
        x, y, w, h = desc["x"], desc["y"], desc["width"], desc["height"]
        mask[y:y+h, x:x+w] = True
    elif desc["annot"] == "ellipse":
        cx, cy, a, b, angle = desc["cx"], desc["cy"], desc["a"]/2, desc["b"]/2, desc["angle"]
        y_grid, x_grid = np.ogrid[:height, :width]
        ellipse_mask = (((x_grid - cx) * np.cos(np.deg2rad(angle)) + (y_grid - cy) * np.sin(np.deg2rad(angle)))**2 / a**2 +
                        ((x_grid - cx) * np.sin(np.deg2rad(angle)) - (y_grid - cy) * np.cos(np.deg2rad(angle)))**2 / b**2) <= 1
        mask[ellipse_mask] = True
    elif desc["annot"] == "polyline":
        points = np.array(desc["points"])
        rr, cc = polygon(points[:, 1], points[:, 0], shape=(height, width))
        mask[rr, cc] = True

    # Get the 1D PCA values for the annotated pixels
    mask_1d = mask.ravel()
    annotated_pc_values = principal_components[mask_1d]

    # Append the PCA values to the correct list in the dictionary
    if class_name not in grouped_pc_values:
        grouped_pc_values[class_name] = []
    grouped_pc_values[class_name].append(annotated_pc_values)

# Concatenate the lists to get a single array of values for each class
final_class_data = {
    class_name: np.concatenate(pc_lists)
    for class_name, pc_lists in grouped_pc_values.items()
}

# --- Plotting the data by class ---
plt.figure(figsize=(10, 8))

component_x = 0
component_y = 1

# Plot all pixels in the background
plt.scatter(principal_components[:, component_x], principal_components[:, component_y], c='gray', alpha=0.1, s=1)

# Get class names and colors from the property_desc for a robust plot
class_labels = annotations_data['property_desc']['type']['labels']
class_colors = annotations_data['property_desc']['type']['colors']

# Iterate through the classes and plot each group with its own color and label
for class_name in class_labels:
    if class_name in final_class_data:
        pc_values = final_class_data[class_name]
        plt.scatter(pc_values[:, component_x], pc_values[:, component_y],
                    color=class_colors[class_name], s=10, label=class_name)

plt.title("PCA Scatter Plot with Annotated Regions")
plt.xlabel(f"Principal Component {component_x}")
plt.ylabel(f"Principal Component {component_y}")
plt.legend()
plt.grid(True)
plt.show()

# --- Part 2: Classify the Entire Image based on Annotations ---

# Calculate the centroid (mean) for each class
centroids = {
    class_name: np.mean(pc_values, axis=0)
    for class_name, pc_values in final_class_data.items()
}

# Create a mapping from class title to a number for plotting
class_names = sorted(centroids.keys())
class_map = {name: i + 1 for i, name in enumerate(class_names)}
centroid_array = np.array([centroids[name] for name in class_names])

# Vectorized classification
distances = np.linalg.norm(principal_components[:, np.newaxis] - centroid_array, axis=2)
closest_centroid_indices = np.argmin(distances, axis=1)
classification_image = closest_centroid_indices + 1

# Reshape and visualize
classification_image_2d = classification_image.reshape(height, width)

# Use the colors from property_desc to correctly map colors
color_dict = annotations_data['property_desc']['type']['colors']
class_colors = [color_dict[name] for name in class_names]
final_colors = ['#000000'] + class_colors
final_map = plt.cm.colors.ListedColormap(final_colors)

plt.figure(figsize=(10, 8))
plt.imshow(classification_image_2d, cmap=final_map, interpolation='none')
plt.title("Image Classified by PCA Centroids")
plt.xlabel("Width")
plt.ylabel("Height")

cbar = plt.colorbar(ticks=range(1, len(class_map) + 1))
cbar.ax.set_yticklabels(class_names)

plt.show()

Python

Reading PAM datacubes

See PAM for more details on the file format itself.

import numpy as np

def read_pam_file(file_path, chan_rmv=None):
    try:
        # Open the .pam file for reading
        with open(file_path, "rb") as file:
            # Parse the PAM header until the "ENDHDR" marker
            def parse_pam_header(file):
                header = {}
                while True:
                    line = file.readline().decode("utf-8").strip()
                    if not line:
                        break
                    if line == "ENDHDR":
                        break
                    parts = line.split(" ", 1)
                    if len(parts) == 2:
                        key, value = parts
                        header[key] = value
                    elif len(parts) == 1:
                        # Handle lines with no key (just a value)
                        header[line] = None
                return header

            # Parse the PAM header until "ENDHDR"
            header = parse_pam_header(file)

            # Get the image properties from the header
            width = int(header["WIDTH"])     # Actual width of the image
            channels = int(header["DEPTH"])  # Corresponds to the number of spectral channels
            maxval = int(header["MAXVAL"])
            height = int(header["HEIGHT"])   # Corresponds to the number of scan lines
            interleave = header.get("TUPLTYPE")

            # Read the binary data with the number of channels determined by the depth
            data = np.fromfile(file, dtype=np.uint8, count=width * height * channels)

            # image_data = data.reshape(width, height, channels, order='F')  # Fortran-style
            if interleave == "BIL":
                image_data = data.reshape(height, width, channels, order='C')  # C-style
            else:  # BSQ
                image_data = data.reshape(channels, height, width, order='C')  # C-style
            print("Shape of cube: ", image_data.shape)

            # Optionally trim the ends of the first axis
            # (the spectral channels for a BSQ cube)
            if chan_rmv is not None:
                image_data = image_data[chan_rmv:-chan_rmv]
            return image_data

    except FileNotFoundError:
        print(f"File not found: {file_path}")
        return None
    except Exception as e:
        print(f"An error occurred: {e}")
        return None
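
A short usage sketch, assuming the nuts datacube from the PCA example above (and assuming it is stored as BSQ, so the first axis indexes the spectral channels):

import matplotlib.pyplot as plt

cube = read_pam_file("HSI_data/nuts/mix1_1296x1000x900_imageCube.pam")
if cube is not None:
    # Show a single spectral band of the BSQ cube
    plt.imshow(cube[450], cmap="gray")
    plt.show()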
PAM interleave

The above example expects PAM files captured using the Buteo Hyperspectral Imaging System, which have a BIL or BSQ interleave type.

Note that the PAM format native interleave is BIP.

Writing PNM files and PAM datacubes

See PAM for more details on the file format itself.

from datetime import datetime
import sys

# PNM/PAM
def needs_byteswap(frame):
    # 8-bit data never needs swapping
    if frame.itemsize == 1:
        return False
    # Already big endian
    if frame.dtype.byteorder == ">":
        return False
    # Native byte order on a big-endian machine
    if frame.dtype.byteorder == "=" and sys.byteorder == "big":
        return False
    return True

def write_pnm(name, frame):
    with open(name, "wb") as f:
        height, width, channels = frame.shape

        if channels == 1:
            magic = "P5"  # greyscale
        else:
            magic = "P6"  # RGB

        if frame.nbytes == width * height * channels:
            if frame.dtype == "uint8":
                max_val = 255
            else:
                max_val = 65535
        else:
            max_val = 65535

        pnm_header = f"{magic} {width} {height} {max_val}\n"
        f.write(bytearray(pnm_header, "ascii"))

        if needs_byteswap(frame):
            # Swap the data in memory in order to properly write PNM,
            # since PPM is big endian by definition.
            # Note that if the operation is done in place
            # it will mess up the endianness when reading single values out,
            # so in that case use: frame = frame.byteswap(True).newbyteorder()
            # in order to keep the correct byte order.
            frame.byteswap().tofile(f)
        else:
            frame.tofile(f)

# Write a PAM header for BIL interleave (band-interleaved-by-line)
def write_pam_header(name, frame, gain, exposure, n_imgs):
    with open(name, "wb") as f:
        width, channels, height = frame.shape
        if frame.nbytes == width * height * channels:
            if frame.dtype == "uint8":
                max_val = 255
            else:
                max_val = 65535
        else:
            max_val = 65535
        height = n_imgs  # The cube height is the number of captured frames

        # Get today's date and time
        now = datetime.now()
        date_str = now.strftime("%Y-%m-%d")
        time_str = now.strftime("%H:%M:%S")

        pam_header = (
            f"P7\n"
            f"#Date: {date_str}\n"
            f"#Time: {time_str}\n"
            f"#Gain: {gain}\n"
            f"#Exposure: {exposure}\n"
            f"WIDTH {width}\n"
            f"HEIGHT {height}\n"
            f"DEPTH {channels}\n"
            f"MAXVAL {max_val}\n"
            f"TUPLTYPE BIL\n"
            f"ENDHDR\n"
        )
        f.write(bytearray(pam_header, "ascii"))

def append_pam_image(name, frame):
    with open(name, "ab") as f:
        if needs_byteswap(frame):
            # Swap the data in memory in order to properly write PAM,
            # since PAM is big endian by definition.
            # Note that if the operation is done in place
            # it will mess up the endianness when reading single values out,
            # so in that case use: frame = frame.byteswap(True).newbyteorder()
            # in order to keep the correct byte order.
            frame.byteswap().tofile(f)
        else:
            frame.tofile(f)
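
A minimal usage sketch with synthetic frames (the shapes are illustrative; in the capture example further down the frames come straight from qamlib):

import numpy as np

n_imgs = 10
# Synthetic 8-bit frames (rows x columns x 1 channel)
frames = [np.random.randint(0, 256, (50, 64, 1), dtype=np.uint8) for _ in range(n_imgs)]

write_pam_header("/tmp/example.pam", frames[0], gain=1.0, exposure=900, n_imgs=n_imgs)
for frame in frames:
    append_pam_image("/tmp/example.pam", frame)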
PAM interleave

The above example generates PAM files with a BIL interleave type.

Note that the PAM format native interleave is BIP.

Reflectance transformation

def refl_trans(fl, white_avg, dark_avg, clip=False, nan_replacement=None):
    """Perform reflectance transformation on a hyperspectral image cube."""

    # Compute the reflectance cube
    cube = read_pam_file(fl)
    cube = (cube - dark_avg) / (white_avg - dark_avg)

    # The transpose gives the correct orientation of the real image, with a format of (H, W, C),
    # assuming the Buteo conveyor belt moves toward the right
    cube = np.transpose(cube, axes=(2, 1, 0))[::-1, ::-1, :]

    # Handle values out of range
    if clip:
        cube = cube.clip(0.0, 1.0)

    # Handle NaN caused by a possible division by zero in (white - dark)
    if nan_replacement is not None:
        cube = np.nan_to_num(cube, nan=nan_replacement)

    return cube
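
A usage sketch, assuming the white/dark reference cubes from the PCA example above (BSQ cubes as returned by read_pam_file, so the scan lines are on axis 1):

white = read_pam_file("HSI_data/nuts/white_ref_1296x100x900_imageCube.pam").astype(np.float32)
dark = read_pam_file("HSI_data/nuts/dark_ref_1296x100x900_imageCube.pam").astype(np.float32)

# Average each reference over its scan lines; keepdims lets the
# result broadcast against the full-size datacube
white_avg = white.mean(axis=1, keepdims=True)
dark_avg = dark.mean(axis=1, keepdims=True)

refl = refl_trans("HSI_data/nuts/mix1_1296x1000x900_imageCube.pam", white_avg, dark_avg, clip=True)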

Capturing datacubes

info
  • Requires qamlib and should therefore be run inside the camera (over ssh or with a screen and keyboard).
  • This example assumes a belt running at a constant speed.
  • This example generates datacubes with a BIL interleave type.
  • The datacube files are saved to /tmp (RAM) and will therefore be gone after a reboot.
    • Copy them over to a different machine using the scp command or WinSCP:
      • from the host PC: scp root@<cam_ip>:/tmp/<file_name> <destination>
        • for example: scp root@10.100.10.100:/tmp/_HSI_* .
    • Note that saving directly to the camera CFAST (for permanent storage) might be too slow (slow IO operations).
      • Consider connecting an SSD to the camera via the USB ports if fast permanent storage is necessary.
    • Also note that the camera has 4GB of RAM, so there is a limit to how many images can be saved.
      • For example, the full datacube below (with 1000 images) takes 1.1GB of space (see the size estimate after this list).
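As a rough sanity check of the 1.1GB figure, a back-of-the-envelope sketch (the 920 spectral lines follow from the 430-1700nm crop in the script below):

# 1000 frames x 1296 pixels x 920 spectral lines x 1 byte (8-bit GREY)
n_imgs, width, bands = 1000, 1296, 920
size_gib = n_imgs * width * bands / 2**30
print(f"{size_gib:.1f} GiB")  # ~1.1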
import qamlib
import time
import math
import numpy as np
from datetime import datetime
import sys


def current_datetime_filename():
    return datetime.now().strftime("%Y-%m-%d_%H-%M-%S")

SAVE_PAM = True
SAVE_PNM = False
pam_filename = f"/tmp/_HSI_{current_datetime_filename()}.pam"

# Adjust these as necessary
N_IMGS = 1000
EXP_TIME = 900 #us
GAIN = 1.0 #x
LEFT = 0
WIDTH = 1296
FIRST_BAND = 430.0 #nm
LAST_BAND = 1700.0 #nm
MIN_FPS = 10
MM_PER_PX = 0.35 #mm/px
BELT_SPEED = 100 #mm/s
#PX_SIZE*HEIGHT/FOCAL_LENGTH = BELT_WIDTH/1280

# Convert gain from dB to a linear factor (x)
def dB2x(dB):
    return math.pow(10, dB/20.0)

# Convert a linear gain factor (x) to dB
def x2dB(x):
    return 20.0*math.log10(x)

# spectrograph
START_WL = 430.0 #nm
END_WL = 1700.0 #nm
CALIB_HEIGHT = 920
CALIB_TOP = 20
NM2LINE = (END_WL - START_WL)/CALIB_HEIGHT
SLIT_SIZE = 20.0 #um
PX_SIZE = 5.0 #um

# Calculate the framerate based on the belt speed (in mm/s)
def calc_fps(belt_speed, force_aspect_ratio=True):
    # 1-to-1 sampling
    if not force_aspect_ratio:
        return 1/((SLIT_SIZE/PX_SIZE)*MM_PER_PX/belt_speed)
    # aspect ratio of the resulting image is equal in the two spatial directions
    else:
        return 1/(MM_PER_PX/belt_speed)

# Wavelength to line number conversion
def band2line(band):
    return round(CALIB_TOP + (band - START_WL)/NM2LINE)

# Line number to wavelength conversion
def line2band(line):
    return (line - CALIB_TOP)*NM2LINE + START_WL

TOP = band2line(FIRST_BAND)
BOTTOM = band2line(LAST_BAND)
HEIGHT = BOTTOM - TOP


# Program start
try:
    # Opens the camera at '/dev/qtec/video0'
    cam = qamlib.Camera()

    #cam.set_format("Y16")  # 16-bit
    #cam.set_control("Sensor Bit Mode", 2)  # 12-bit
    cam.set_format("GREY")
    cam.set_control("Exposure Time, Absolute", EXP_TIME)
    cam.set_control("Gain", round(x2dB(GAIN)*1000))  # milli-dB
    # Binning (adjust the crop if binning is used):
    #cam.set_resolution(WIDTH, HEIGHT/2)
    #cam.set_control("Vertical Binning", 2)
    cam.set_crop(LEFT, TOP, WIDTH, HEIGHT)

    fps_limits = cam.get_framerates()
    min_fps = max(fps_limits.min, MIN_FPS)
    fps = calc_fps(BELT_SPEED)
    if fps > fps_limits.max:
        print(f"Warning: required fps ({fps}) is over max: {fps_limits.max}")
        fps = fps_limits.max
    if fps < min_fps:
        print(f"Warning: required fps ({fps}) is under min: {min_fps}")
        fps = min_fps
    cam.set_framerate(fps)

    # Read camera controls/parameters
    img_format = cam.get_format()
    px_format = img_format.pixelformat
    exposure = cam.get_control("Exposure Time, Absolute")
    gain = cam.get_control("Gain")
    h_bin = cam.get_control('Horizontal Binning')
    v_bin = cam.get_control('Vertical Binning')

    print(f"FPS: {cam.get_framerate()} [{fps_limits}]")
    print(f"Frame Size: {cam.get_resolution()}")
    print(f"Crop: {cam.get_crop()}")
    print(f"Binning: {h_bin}x{v_bin}")
    print(f"Pixel format: {img_format}")
    print(f"Exposure: {exposure}")
    print(f"Gain: {dB2x(gain/1000)}")
    print("\n")

except Exception as e:
    template = "An exception of type {0} occurred. Arguments:\n{1!r}"
    message = template.format(type(e).__name__, e.args)
    print(message)
    sys.exit(1)

# Start and stop streaming with context manager
print("Starting frame capture")
try:
    with cam:
        if SAVE_PAM:
            meta, frame = cam.get_frame(buffered=True)
            write_pam_header(pam_filename, frame, gain, exposure, N_IMGS)

        #while True:
        for i in range(N_IMGS):
            meta, frame = cam.get_frame(buffered=True)

            #print(f"{meta.sequence=}")

            # Save the datacube
            if SAVE_PAM:
                append_pam_image(pam_filename, frame)

            # Save individual images
            if SAVE_PNM:
                write_pnm(f"/tmp/_HSI_{i}.pnm", frame)

except Exception as e:
    template = "An exception of type {0} occurred. Arguments:\n{1!r}"
    message = template.format(type(e).__name__, e.args)
    print(message)

print("Done with frame capture")

Encoder read-out

Read-out of the encoder input in order to adjust the framerate to the belt speed.

# encoder
TICKS_PER_ROUND = 2048
SHAFT_RAD = 25 #mm
MM_PER_TICK = SHAFT_RAD*(2*math.pi)/TICKS_PER_ROUND

# Adjust camera settings/controls as before
...

# Start and stop streaming with context manager
print("Starting frame capture")
try:
    with cam:
        ticks_sec = None
        speed_mm_per_sec = None
        last_time = None
        last_pos = None
        time_diff = 0
        pos_diff = 0
        acc_time = 0
        acc_pos = 0

        if SAVE_PAM:
            meta, frame = cam.get_frame(buffered=True)
            write_pam_header(pam_filename, frame, gain, exposure, N_IMGS)

        #while True:
        for i in range(N_IMGS):
            meta, frame = cam.get_frame(buffered=True)
            enc_pos = cam.get_ext_control('Encoder Position').value
            now = time.time()

            #print(f"{meta.sequence=}")

            # Encoder
            if last_time:
                time_diff = now - last_time
            if last_pos:
                pos_diff = enc_pos - last_pos
                if pos_diff < 0:  # Counter wrap-around
                    pos_diff += np.iinfo(enc_pos.dtype).max
            acc_pos += pos_diff
            acc_time += time_diff

            # Calculate the speed once a second
            if acc_time >= 1.0:
                ticks_sec = acc_pos/acc_time
                speed_mm_per_sec = ticks_sec * MM_PER_TICK
                shaft_rad = (MM_PER_TICK * TICKS_PER_ROUND)/(2*math.pi)
                print(f"{ticks_sec=} {speed_mm_per_sec=} {shaft_rad=}")
                acc_time = 0
                acc_pos = 0

                # Adjust the fps to the measured belt speed
                if speed_mm_per_sec > 1:
                    fps = calc_fps(speed_mm_per_sec)
                    if fps > fps_limits.max:
                        print(f"Warning: required fps ({fps}) is over max: {fps_limits.max}")
                        fps = fps_limits.max
                    if fps < min_fps:
                        print(f"Warning: required fps ({fps}) is under min: {min_fps}")
                        fps = min_fps
                    cam.set_framerate(fps)
                    print(f"New FPS: {cam.get_framerate()} [{fps_limits}]")

            last_time = now
            last_pos = enc_pos

            #print(f"{enc_pos=} {pos_diff=} {time_diff=} {ticks_sec=} {speed_mm_per_sec=}")

            # Save the datacube
            if SAVE_PAM:
                append_pam_image(pam_filename, frame)

            # Save individual images
            if SAVE_PNM:
                write_pnm(f"/tmp/_HSI_{i}.pnm", frame)

except Exception as e:
    template = "An exception of type {0} occurred. Arguments:\n{1!r}"
    message = template.format(type(e).__name__, e.args)
    print(message)

print("Done with frame capture")

Matlab

Reading PAM datacubes

  • msp_HSI_buteo.m
function cube = msp_HSI_buteo(file_path)
% msp_HSI_buteo - Reads and reshapes a hyperspectral image cube from a .pam file.
%
% This function reads a SWIR hyperspectral image stored in a .pam file and
% reshapes it into a 3D cube format.
%
% Author: msp
% Date: February 2025
%
% Input:
% file_path - String specifying the path to the .pam file.
%
% Output:
% cube - A 3D hyperspectral image cube (height x width x bands).
%
% Note:
% - The function assumes a specific file format where the first 11 lines
% form a header that is ignored.
% - The reshaping assumes a fixed image size of 1296 (height) × variable width × 900 (bands).
%

    % Open the file for reading
    fileID = fopen(file_path, 'r');
    if fileID == -1
        error('Error: Could not open file %s', file_path);
    end

    % Read and store the first 11 lines as the header
    header = [];
    for i = 1:11
        header = [header, fgets(fileID)]; % Read the header, one line at a time
    end
    byte_length = numel(header); % Byte length of the header (not used)

    % Read the rest of the .pam file (excluding the header)
    data = uint8(fread(fileID)); % Read as unsigned 8-bit integers

    % Close the file
    fclose(fileID);

    % Reshape the data into a 3D hyperspectral cube
    cube = reshape(data, 1296, [], 900); % SWIR Buteo assumed dimensions

    % Clear the unnecessary variable
    clear data;
end
PAM interleave

The above example expects PAM files captured using the Buteo Hyperspectral Imaging System which have a BSQ interleave type.

Note that the PAM format native interleave is BIP.

Reflectance transformation

  • msp_refl_trans.m
function cube_refl = msp_refl_trans(cube, mean_white, mean_dark)
% msp_refl_trans - Performs reflectance transformation on a hyperspectral image cube.
%
% This function converts raw hyperspectral intensity values into reflectance values
% using a white reference and a dark reference.
%
% Author: msp
% Date: February 2025
%
% Input:
% cube - A 3D hyperspectral image cube (height x width x bands).
% mean_white - A reference white image (same dimensions as cube or a single spectrum).
% mean_dark - A reference dark image (same dimensions as cube or a single spectrum).
%
% Output:
% cube_refl - The hyperspectral cube transformed to reflectance values.

    % Compute the reflectance transformation
    cube_refl = single(cube - uint8(mean_dark)) ./ single(uint8(mean_white - mean_dark));

    % Handle NaN and Inf values (caused by division by zero)
    cube_refl(isnan(cube_refl) | isinf(cube_refl)) = 0;
    cube_refl = single(cube_refl);
end