This vignette collects small, task‑oriented examples (“cookbook”
style) for common NeuroVec workflows that don’t warrant a
full standalone vignette.
We assume basic familiarity with NeuroVec from
vignette("NeuroVector").
Sometimes you want to take a 4D time‑series (NeuroVec)
and collapse the time dimension into a single 3D volume by applying a
reduction function to each voxel’s time‑series (e.g., temporal mean,
standard deviation, or any custom summary).
The pattern is: convert the NeuroVec to a voxel × time matrix with
as.matrix(), apply a reduction function to each row (one value per
voxel), and wrap the result back into a NeuroVol carrying the input's
spatial space.
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name) # 4D NeuroVec
reduce_ts_to_vol <- function(x, FUN) {
dm <- dim(x)
stopifnot(length(dm) == 4)
mat <- as.matrix(x) # voxels × time
vals <- apply(mat, 1, FUN) # one value per voxel
NeuroVol(array(vals, dm[1:3]), drop_dim(space(x)))
}
# Example: temporal mean volume
mean_vol <- reduce_ts_to_vol(vec, mean)
mean_vol
#>
#> === NeuroVol Object ===
#>
#> * Basic Information
#> Type: DenseNeuroVol
#> Dimensions: 64 x 64 x 25 (806.6 Kb)
#> Total Voxels: 102,400
#>
#> * Data Properties
#> Value Range: [0.00, 1.00]
#>
#> * Spatial Properties
#> Spacing: 3.50 x 3.50 x 3.70 mm
#> Origin: 112.0, -108.0, -46.2 mm
#> Axes: Right-to-Left x Posterior-to-Anterior x Inferior-to-Superior
#>
#> ======================================
#>
#> Access Methods:
#> . Get Slice: slice(object, zlevel=10)
#> . Get Value: object[i, j, k]
#> . Plot: plot(object) # shows multiple slices
You can plug in any summary function that maps a numeric vector to a
single value (e.g. median, a trimmed mean, robust
summaries, etc.).
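For instance, using the reduce_ts_to_vol() helper defined above with
other reducers:
# Temporal standard deviation at each voxel
sd_vol <- reduce_ts_to_vol(vec, sd)
# A 10%-trimmed temporal mean, via an anonymous function
trim_vol <- reduce_ts_to_vol(vec, function(ts) mean(ts, trim = 0.1))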
If you have a single concatenated NeuroVec with multiple
logical blocks along the time dimension (e.g., runs or sessions), you
can “un‑concatenate” it into one NeuroVec per block using
split_blocks().
Create an index vector of length dim(vec)[4] indicating
which block each timepoint belongs to, then split the time index and
convert each subset to a sub-vector (this is essentially what
split_blocks() does internally):
space4 <- NeuroSpace(c(10, 10, 10, 9), c(1, 1, 1))
vec4d <- NeuroVec(array(rnorm(10 * 10 * 10 * 9),
dim = c(10, 10, 10, 9)),
space4)
# Suppose timepoints 1–3 belong to block 1, 4–6 to block 2, 7–9 to block 3
block_idx <- c(1, 1, 1,
2, 2, 2,
3, 3, 3)
idx_list <- split(seq_len(dim(vec4d)[4]), block_idx)
blocks <- lapply(idx_list, function(ii) sub_vector(vec4d, ii))
length(blocks) # 3 blocks
#> [1] 3
dim(blocks[[1]]) # first block: 10×10×10×3
#> [1] 10 10 10 3
dim(blocks[[2]]) # second block: 10×10×10×3
#> [1] 10 10 10 3
dim(blocks[[3]]) # third block: 10×10×10×3
#> [1] 10 10 10 3
The result is a list of NeuroVec objects; access
elements with blocks[[i]] for block i. This is
effectively the inverse of concatenation by time. The split_blocks()
helper exposes this pattern with method dispatch for NeuroVec.
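A minimal sketch of the direct call, assuming split_blocks() takes the
vector and the per-timepoint block labels (see ?split_blocks for the
exact signature):
# Assumed interface: split_blocks(x, indices) with one label per timepoint
blocks2 <- split_blocks(vec4d, block_idx)
length(blocks2)   # expect 3, matching the manual split above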
For large 4D datasets or shared‑memory workflows on HPC, it is often
useful to back a NeuroVec by a memory‑mapped file using
MappedNeuroVec. The as_mmap() helper converts
common vector types (NeuroVec, SparseNeuroVec,
FileBackedNeuroVec) into a MappedNeuroVec,
writing an uncompressed NIfTI file if needed.
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name) # DenseNeuroVec in memory
# Convert to a memory-mapped representation backed by a temporary .nii file
mvec <- as_mmap(vec)
mvec
#>
#> MappedNeuroVec Object
#> ====================
#>
#> Dimensions:
#> . Spatial: 64 x 64 x 25
#> . Temporal: 4
#>
#> Memory Mapping:
#> . Offset: 88 elements
#>
#> Space Information:
#> . Spacing: 3.5 x 3.5 x 3.7
#> . Origin: 112 x -108 x -46.2
#>
#> Label:
#> . file894716ba0f85.nii
# Or explicitly choose an output file (must be uncompressed for mmap)
tmp_nii <- tempfile(fileext = ".nii")
mvec2 <- as_mmap(vec, file = tmp_nii, overwrite = TRUE)
inherits(mvec2, "MappedNeuroVec")
#> [1] TRUE
For sparse data (SparseNeuroVec), as_mmap()
first densifies the vector and writes a full 4D NIfTI before
mapping it, trading a one-time memory and disk cost for more efficient
subsequent access and multi‑process sharing.
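Whichever route produced it, the mapped vector supports the usual
NeuroVec access patterns, e.g. extracting a single voxel's time‑series
(assuming the standard series() accessor):
# Time-series at voxel (10, 10, 10), read through the memory map
ts_vox <- series(mvec2, 10, 10, 10)
length(ts_vox)   # 4 timepoints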
To apply a spatial kernel (e.g. a 3×3×3 mean filter) over a 3D
volume, use mapf() with a Kernel object. This
keeps the familiar NeuroVol/NeuroSpace
metadata while doing neighborhood computations.
bspace <- NeuroSpace(c(10, 10, 10), c(1, 1, 1))
vol <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), bspace)
# Simple 3×3×3 mean kernel
kern <- Kernel(c(3, 3, 3), vdim = c(3, 3, 3))
smoothed_vol <- mapf(vol, kern)
smoothed_vol
#>
#> === NeuroVol Object ===
#>
#> * Basic Information
#> Type: DenseNeuroVol
#> Dimensions: 10 x 10 x 10 (14.4 Kb)
#> Total Voxels: 1,000
#>
#> * Data Properties
#> Value Range: [-2.21, 2.67]
#>
#> * Spatial Properties
#> Spacing: 1.00 x 1.00 x 1.00 mm
#> Origin: 0.0, 0.0, 0.0 mm
#> Axes: Left-to-Right x Posterior-to-Anterior x Inferior-to-Superior
#>
#> ======================================
#>
#> Access Methods:
#> . Get Slice: slice(object, zlevel=10)
#> . Get Value: object[i, j, k]
#> . Plot: plot(object) # shows multiple slices
You can also pass a logical mask to restrict computation to a region
while keeping the output volume in the original space.
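A sketch of the masked variant, assuming the mapf() method accepts a
mask argument (check ?mapf for the exact name):
# Hypothetical mask argument: restrict smoothing to a 5x5x5 sub-region
mask_arr <- array(FALSE, c(10, 10, 10))
mask_arr[3:7, 3:7, 3:7] <- TRUE
mask_vol <- LogicalNeuroVol(mask_arr, bspace)
smoothed_roi <- mapf(vol, kern, mask = mask_vol)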
Given voxelwise cluster labels (for example, from a parcellation),
split_clusters() can turn a NeuroVec into a
list of ROIVec objects—one per cluster—each containing the
time‑series of voxels in that cluster.
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name)
# Fake cluster labels over the full 3D grid
n_vox <- prod(dim(vec)[1:3])
cl_lab <- sample(1:5, n_vox, replace = TRUE)
roi_list <- split_clusters(vec, cl_lab)
length(roi_list) # number of non-empty clusters
#> [1] 5
roi_list[[1]] # ROIVec for cluster "1"
#>
#> === ROIVec Object ===
#>
#> - Structure
#> Points: 20,592
#> Features: 4 (82,368 total)
#> Memory: 891.8 Kb
#>
#> - Spatial Properties
#> Parent Space: 64 x 64 x 25 x 4
#> Centroid: [32.6, 32.7, 13.0 mm]
#>
#> - Value Properties
#> Range: [0.00, 1.00]
#>
#> ======================================
#>
#> Access Methods:
#> . Get Points: coords(object)
#> . Get Values: as.matrix(object)
#> . Subset: object[1:10, ]
coords(roi_list[[1]])[1:5, ] # first few voxel coordinates
#> [,1] [,2] [,3]
#> [1,] 3 1 1
#> [2,] 10 1 1
#> [3,] 12 1 1
#> [4,] 15 1 1
#> [5,] 18 1 1
dim(values(roi_list[[1]])) # time × voxels in that cluster
#> [1] 4 20592
This is useful when you want to work directly with voxel‑level ROIs
per cluster rather than first building a
ClusteredNeuroVec.
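A common follow‑up is to average each cluster's voxels into one
time‑series; since values() returns a time × voxels matrix, rowMeans()
does the reduction:
# One mean time-series per cluster: timepoints x clusters
cluster_means <- sapply(roi_list, function(roi) rowMeans(values(roi)))
dim(cluster_means)   # 4 x 5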
When you have a voxel‑wise grouping (e.g. tissue classes, parcels, or
custom regions), split_reduce() can compute one summary
time‑series per group directly from a NeuroVec.
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
vec <- read_vec(file_name)
n_vox <- prod(dim(vec)[1:3])
# Assign each voxel to one of three arbitrary groups
fac <- factor(sample(1:3, n_vox, replace = TRUE))
# Default: mean over voxels in each group (per timepoint)
group_ts <- split_reduce(vec, fac)
dim(group_ts) # groups × timepoints
#> [1] 3 4
rownames(group_ts) # "1", "2", "3"
#> [1] "1" "2" "3"Here each row of group_ts is the mean time‑series of all
voxels in that group; you can supply any function FUN that
maps a numeric vector to a scalar (e.g. median, robust
summaries, etc.).
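For example, a per‑group median time‑series, assuming the summary
function is passed as the third argument (see ?split_reduce):
# Median over voxels in each group, per timepoint (assumed FUN argument)
group_med <- split_reduce(vec, fac, median)
dim(group_med)   # still groups x timepoints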
To build a 4D NeuroVec from multiple 3D
NeuroVol objects that share the same space, use
concat() along the time dimension.
sp3 <- NeuroSpace(c(10, 10, 10), c(1, 1, 1))
vol1 <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), sp3)
vol2 <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), sp3)
vol3 <- NeuroVol(array(rnorm(10 * 10 * 10), c(10, 10, 10)), sp3)
# Concatenate volumes into a 4D NeuroVec (time dimension length 3)
vec_3 <- concat(vol1, vol2, vol3)
dim(vec_3) # 10 × 10 × 10 × 3
#> [1] 10 10 10 3
space(vec_3) # inherits spatial metadata from inputs
#>
#> NeuroSpace Object
#>
#> >> Dimensions
#> Grid Size: 10 x 10 x 10 x 3
#> Memory: 5.9 KB
#>
#> >> Spatial Properties
#> Spacing: 1.00 x 1.00 x 1.00 mm
#> Origin: 0.00 x 0.00 x 0.00 mm
#>
#> >> Anatomical Orientation
#> X: Left-to-Right | Y: Posterior-to-Anterior | Z: Inferior-to-Superior
#>
#> >> World Transformation
#> Forward (Voxel to World):
#> 1.000 0.000 0.000 0.000
#> 0.000 1.000 0.000 0.000
#> 0.000 0.000 1.000 0.000
#> 0.000 0.000 0.000 1.000
#> Inverse (World to Voxel):
#> 1.000 0.000 0.000 0.000
#> 0.000 1.000 0.000 0.000
#> 0.000 0.000 1.000 0.000
#> 0.000 0.000 0.000 1.000
#>
#> >> Bounding Box
#> Min Corner: 0.0, 0.0, 0.0 mm
#> Max Corner: 9.0, 9.0, 9.0 mm
#>
#> ==================================================
All input volumes must have identical spatial dimensions and
NeuroSpace; otherwise concat() will error.
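A quick way to see that guard, assuming concat() signals a regular R
error on a grid mismatch:
# Volumes on a different grid cannot be concatenated
sp_other <- NeuroSpace(c(8, 8, 8), c(1, 1, 1))
vol_bad  <- NeuroVol(array(rnorm(8 * 8 * 8), c(8, 8, 8)), sp_other)
tryCatch(concat(vol1, vol_bad),
         error = function(e) message("concat failed: ", conditionMessage(e)))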
You can also concatenate multiple NeuroVec objects
(e.g. runs or sessions) into a longer 4D vector, again with
concat().
file_name <- system.file("extdata", "global_mask_v4.nii", package = "neuroim2")
run1 <- read_vec(file_name) # 4D NeuroVec
run2 <- read_vec(file_name) # same space and shape
# Concatenate timepoints: result has dim(...)[4] = dim(run1)[4] + dim(run2)[4]
run12 <- concat(run1, run2)
dim(run1)
#> [1] 64 64 25 4
dim(run2)
#> [1] 64 64 25 4
dim(run12)
#> [1] 64 64 25 8
This is the natural inverse of splitting by blocks: you can break a
long NeuroVec into segments with
split_blocks() and then reconstruct a longer series again
with concat().
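A round‑trip check using the sub_vector() pattern from earlier:
# Split the concatenated series into two 4-timepoint segments...
seg_idx  <- split(seq_len(dim(run12)[4]), rep(1:2, each = 4))
segments <- lapply(seg_idx, function(ii) sub_vector(run12, ii))
# ...then reassemble and confirm the shape is unchanged
rebuilt <- do.call(concat, unname(segments))
identical(dim(rebuilt), dim(run12))   # TRUE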
To identify spatially contiguous clusters in a 3D volume or mask, use
conn_comp() on a NeuroVol. This is handy for
summarizing thresholded statistical maps or cleaning up binary
masks.
sp <- NeuroSpace(c(10, 10, 10), c(1, 1, 1))
arr <- array(0, c(10, 10, 10))
# Two small 2×2×2 clusters in opposite corners
arr[1:2, 1:2, 1:2] <- 1
arr[8:9, 8:9, 8:9] <- 1
vol <- NeuroVol(arr, sp)
# Find connected components above threshold 0 (26-connectivity by default)
cc <- conn_comp(vol, threshold = 0)
max(cc$index) # number of clusters (should be 2)
#> [1] 2
cc$size[cc$size > 0] # cluster sizes in voxels
#> [1] 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
You can also request a cluster_table of summary
statistics or local_maxima for peak finding:
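A sketch of such a call; the argument names here (cluster_table,
local_maxima) are assumptions, so check ?conn_comp for the exact
interface:
# Hypothetical argument names -- verify against ?conn_comp
cc2 <- conn_comp(vol, threshold = 0, cluster_table = TRUE, local_maxima = TRUE)
cc2$cluster_table   # per-cluster summary statistics
cc2$local_maxima    # candidate peaks within each cluster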