Noodle
Public Functions

Files

file  noodle.h
 CNN/ML primitives for tiny MCUs with pluggable filesystem backends.
 

Classes

struct  Conv
 File-backed convolution parameters. More...
 
struct  ConvMem
 Memory-backed convolution parameters. More...
 
struct  Pool
 2D pooling parameters. Use M = 1 and T = 1 for identity (no pooling). More...
 
struct  FCN
 Fully-connected layer parameters. More...
 
struct  FCNFile
 File-backed fully-connected layer parameters (weights/bias read from files). More...
 
struct  FCNMem
 Memory-backed fully-connected layer parameters (in-variable weights/bias). More...
 

Typedefs

typedef void(* CBFPtr) (float progress)
 Progress callback type used by long-running routines.
 

Common parameter semantics

Standard meanings shared by many functions.

Parameters
W: Input spatial width (2D) or length (1D).
K: Kernel size (2D: K×K; 1D: K).
S: Stride.
P: Zero-padding (per side). 2D uses top/left padding of size P.
M: Pool kernel size (2D: M×M).
T: Pool stride.
n_inputs: Number of input channels/features.
n_outputs: Number of output channels/features.
in_fn: Base input filename template (see File naming convention).
out_fn: Base output filename template.
weight_fn: Weight filename template; receives both I and O indices.
bias_fn: Bias filename (one bias per output channel, one scalar per line).
with_relu: If true, apply ReLU after bias.
void noodle_setup_temp_buffers (void *b1, void *b2)
 Provide two reusable temporary buffers used internally by file-streaming operations. Must be called before conv/FCN variants that read from files. Two temp buffers are needed for operations that read from a file. For a C×W×W tensor, each buffer should hold W×W floats.
 
void noodle_setup_temp_buffers (void *b2)
 Provide a single reusable temporary buffer used internally by file-streaming ops. Must be called before conv/FCN variants that read from files. Only one temp buffer is needed for operations that read from a variable; hence only the output accumulator buffer is required. For a C×W×W tensor, the buffer should hold W×W floats.
 

File and File-System Utilities

bool noodle_fs_init (uint8_t clk_pin, uint8_t cmd_pin, uint8_t d0_pin)
 Initialize SD/FS backend (pins variant is meaningful only for SD_MMC).
 
bool noodle_fs_init (uint8_t clk_pin, uint8_t cmd_pin, uint8_t d0_pin, uint8_t d1_pin, uint8_t d2_pin, uint8_t d3_pin)
 Initialize SD/FS backend with explicit pins (meaningful only for SD_MMC).
 
bool noodle_fs_init ()
 Initialize SD/FS backend with default pins/settings.
 
bool noodle_fs_init (uint8_t cs_pin)
 Initialize SD/FS backend with a specific CS_PIN.
 
void noodle_read_top_line (const char *fn, char *line, size_t maxlen)
 Read the first line of a given text file.
 
size_t noodle_read_bytes_until (NDL_File &file, char terminator, char *buffer, size_t length)
 Read bytes from a file until a terminator or length-1 (NULL terminated).
 
void noodle_delete_file (const char *fn)
 Delete a file if it exists.
 

Memory utilities

float * noodle_create_buffer (uint16_t size)
 Allocate a raw float buffer of size bytes.
 
void noodle_delete_buffer (float *buffer)
 Free a buffer allocated by noodle_create_buffer.
 
void noodle_array_to_file (float *array, const char *fn, uint16_t n)
 Write an array of n floats to fn, one value per line. File will be opened and closed.
 
void noodle_grid_to_file (byte *grid, const char *fn, uint16_t n)
 Write an n byte grid to fn as bytes, row-major. File will be opened and closed.
 
void noodle_grid_to_file (float *grid, const char *fn, uint16_t n)
 Write an n float grid to fn, row-major.
 
void noodle_array_from_file (const char *fn, float *buffer, uint16_t K)
 Read a float array of length K from fn (one value per line).
 
void noodle_array_from_file (NDL_File &fi, float *buffer, uint16_t K)
 Read a float array of length K from an opened file handler fi (one value per line).
 
void noodle_grid_from_file (const char *fn, byte *buffer, uint16_t K)
 Read a K × K grid (stored as byte) from fn into buffer.
 
void noodle_grid_from_file (NDL_File &fi, byte *buffer, uint16_t K)
 Read a K × K grid (stored as byte) from fi (opened file handler) into buffer.
 
void noodle_grid_from_file (const char *fn, int8_t *buffer, uint16_t K)
 Read a K × K grid (stored as int8) from fn into buffer.
 
void noodle_grid_from_file (NDL_File &fi, int8_t *buffer, uint16_t K)
 Read a K × K grid (stored as int8) from fi (opened file handler) into buffer.
 
void noodle_grid_from_file (const char *fn, float *buffer, uint16_t K)
 Read a K × K grid (stored as float) from fn into buffer.
 
void noodle_reset_buffer (float *buffer, uint16_t n)
 Fill buffer with zeros (n floats).
 
float * noodle_slice (float *flat, size_t W, size_t z)
 Slice a stacked [Z, W, W] tensor laid out as contiguous planes.
 
void noodle_array_to_file (float *array, NDL_File &fo, uint16_t n)
 Write an array of n floats to fo (an opened file handler), one value per line. No file open and close operations.
 
void noodle_grid_to_file (byte *grid, NDL_File &fo, uint16_t n)
 Write an n byte grid to fo (opened file handler) as bytes, row-major. No file open and close operations.
 
void noodle_grid_to_file (float *grid, NDL_File &fo, uint16_t n)
 Write an n float grid to fo (an opened file handler), row-major.
 
void noodle_grid_from_file (NDL_File &fi, float *buffer, uint16_t K)
 Read a K × K grid (stored as float) from fi (opened file handler) into buffer.
 

2D Convolution

Packed-file conventions:

  • Input file: packed CHW planes (each plane W×W).
  • Output file: packed CHW planes (each plane V_out×V_out) in output-channel order.

Padding:

  • Noodle uses symmetric, stride-independent padding.
  • If P >= 0, that value is used as the symmetric padding on all sides.
  • If P == 65535, padding is computed automatically as: P = floor((K - 1) / 2) which preserves spatial size when S = 1 and K is odd.

Requires temporary buffers set via noodle_setup_temp_buffers. Buffer sizes are typically W×W floats, used as per-channel scratch space.

uint16_t noodle_conv_byte (const char *in_fn, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 File→File 2D conv with BYTE input feature maps.
 
uint16_t noodle_conv_float (const char *in_fn, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 File→File 2D conv with FLOAT input feature maps.
 
uint16_t noodle_conv_float (const char *in_fn, uint16_t n_inputs, uint16_t n_outputs, float *output, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 File→Memory 2D conv with FLOAT inputs; writes [O, Wo, Wo] tensor to output.
 
uint16_t noodle_conv_float (float *input, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 Memory→File 2D conv with FLOAT inputs and in-file conv parameters.
 
uint16_t noodle_conv_float (float *input, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, uint16_t W, const ConvMem &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 Memory→File 2D conv with FLOAT inputs and in-variable conv parameters.
 
uint16_t noodle_conv_float (float *input, uint16_t n_inputs, uint16_t n_outputs, float *output, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 Memory→Memory 2D conv with FLOAT inputs and in-file conv parameters.
 
uint16_t noodle_conv_float (float *input, uint16_t n_inputs, uint16_t n_outputs, float *output, uint16_t W, const ConvMem &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 

1D Convolution

  • Conv.K used as kernel length
  • W used as input length
uint16_t noodle_conv1d (const char *in_fn, uint16_t n_inputs, const char *out_fn, uint16_t n_outputs, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb=NULL)
 
uint16_t noodle_conv1d (const char *in_fn, uint16_t n_inputs, const char *out_fn, uint16_t n_outputs, uint16_t W, const Conv &conv, CBFPtr progress_cb=NULL)
 
uint16_t noodle_conv1d (float *in, uint16_t n_inputs, float *out, uint16_t n_outputs, uint16_t W, const ConvMem &conv, CBFPtr progress_cb=NULL)
 
uint16_t noodle_conv1d (const char *in_fn, uint16_t n_inputs, const char *out_fn, uint16_t n_outputs, uint16_t W, const ConvMem &conv, CBFPtr progress_cb)
 
uint16_t noodle_conv1d (float *in, uint16_t n_inputs, const char *out_fn, uint16_t n_outputs, uint16_t W, const ConvMem &conv, CBFPtr progress_cb)
 
uint16_t noodle_conv1d (const char *in_fn, uint16_t n_inputs, float *out, uint16_t n_outputs, uint16_t W, const ConvMem &conv, CBFPtr progress_cb)
 

Activations

uint16_t noodle_soft_max (float *input_output, uint16_t n)
 
uint16_t noodle_sigmoid (float *input_output, uint16_t n)
 
uint16_t noodle_relu (float *input_output, uint16_t n)
 

Fully Connected Network

uint16_t noodle_fcn (const int8_t *input, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, const FCNFile &fcn, CBFPtr progress_cb=NULL)
 
uint16_t noodle_fcn (const char *in_fn, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, const FCNFile &fcn, CBFPtr progress_cb=NULL)
 
uint16_t noodle_fcn (const float *input, uint16_t n_inputs, uint16_t n_outputs, float *output, const FCNMem &fcn, CBFPtr progress_cb=NULL)
 
uint16_t noodle_fcn (const byte *input, uint16_t n_inputs, uint16_t n_outputs, float *output, const FCNFile &fcn, CBFPtr progress_cb=NULL)
 
uint16_t noodle_fcn (const int8_t *input, uint16_t n_inputs, uint16_t n_outputs, float *output, const FCNFile &fcn, CBFPtr progress_cb=NULL)
 
uint16_t noodle_fcn (const char *in_fn, uint16_t n_inputs, uint16_t n_outputs, float *output, const FCNFile &fcn, CBFPtr progress_cb=NULL)
 
uint16_t noodle_fcn (const float *input, uint16_t n_inputs, uint16_t n_outputs, float *output, const FCNFile &fcn, CBFPtr progress_cb)
 
uint16_t noodle_fcn (const float *input, uint16_t n_inputs, uint16_t n_outputs, const char *out_fn, const FCNFile &fcn, CBFPtr progress_cb)
 

Tensor Reshaping

uint16_t noodle_flat (const char *in_fn, float *output, uint16_t V, uint16_t n_filters)
 
uint16_t noodle_flat (float *input, float *output, uint16_t V, uint16_t n_filters)
 
uint16_t noodle_gap (float *inout, uint16_t C, uint16_t W)
 
void noodle_find_max (float *input, uint16_t n, float &max_val, uint16_t &max_idx)
 

2D Depth-wise Convolution

uint16_t noodle_dwconv_float (const char *in_fn, uint16_t n_channels, const char *out_fn, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb)
 
uint16_t noodle_dwconv_float (float *input, uint16_t n_channels, float *output, uint16_t W, const Conv &conv, const Pool &pool, CBFPtr progress_cb)
 
uint16_t noodle_dwconv_float (float *input, uint16_t n_channels, float *output, uint16_t W, const ConvMem &conv, const Pool &pool, CBFPtr progress_cb)
 
void noodle_unpack_bn_params (const float *bn_params, uint16_t C, const float **gamma, const float **beta, const float **mean, const float **var)
 

Batch Normalization

uint16_t noodle_bn (float *x, uint16_t C, uint16_t W, const float *gamma, const float *beta, const float *mean, const float *var, float eps=1e-3)
 
uint16_t noodle_bn (float *x, uint16_t C, uint16_t W, const float *bn_params, float eps=1e-3)
 
uint16_t noodle_bn_relu (float *x, uint16_t C, uint16_t W, const float *gamma, const float *beta, const float *mean, const float *var, float eps=1e-3)
 
uint16_t noodle_bn_relu (float *x, uint16_t C, uint16_t W, const float *bn_params, float eps=1e-3)
 

Detailed Description

Public functions, types, and configuration intended for application use.

Typedef Documentation

◆ CBFPtr

typedef void(* CBFPtr) (float progress)

Progress callback type used by long-running routines.

Parameters
progress: Normalized progress value in [0,1], monotonically nondecreasing.

Function Documentation

◆ noodle_array_from_file() [1/2]

void noodle_array_from_file ( const char *  fn,
float *  buffer,
uint16_t  K 
)

Read a float array of length K from fn (one value per line).

◆ noodle_array_from_file() [2/2]

void noodle_array_from_file ( NDL_File &  fi,
float *  buffer,
uint16_t  K 
)

Read a float array of length K from an opened file handler fi (one value per line).

◆ noodle_array_to_file() [1/2]

void noodle_array_to_file ( float *  array,
const char *  fn,
uint16_t  n 
)

Write an array of n floats to fn, one value per line. File will be opened and closed.

◆ noodle_array_to_file() [2/2]

void noodle_array_to_file ( float *  array,
NDL_File &  fo,
uint16_t  n 
)

Write an array of n floats to fo (an opened file handler), one value per line. No file open and close operations.

◆ noodle_bn() [1/2]

uint16_t noodle_bn ( float *  x,
uint16_t  C,
uint16_t  W,
const float *  bn_params,
float  eps = 1e-3 
)

Batch Normalization for a channel-first tensor in memory.

Parameters
x: Pointer to the input tensor in [C][W][W] layout.
C: Number of channels.
W: Width/height of each channel plane.
bn_params: Pointer to the packed batch normalization parameters.
eps: Small constant to avoid division by zero.

◆ noodle_bn() [2/2]

uint16_t noodle_bn ( float *  x,
uint16_t  C,
uint16_t  W,
const float *  gamma,
const float *  beta,
const float *  mean,
const float *  var,
float  eps = 1e-3 
)

Batch Normalization for a channel-first tensor in memory.

Parameters
x: Pointer to the input tensor in [C][W][W] layout.
C: Number of channels.
W: Width/height of each channel plane.
gamma: Pointer to the per-channel scale parameters.
beta: Pointer to the per-channel shift parameters.
mean: Pointer to the per-channel mean parameters.
var: Pointer to the per-channel variance parameters.
eps: Small constant to avoid division by zero.

◆ noodle_bn_relu() [1/2]

uint16_t noodle_bn_relu ( float *  x,
uint16_t  C,
uint16_t  W,
const float *  bn_params,
float  eps = 1e-3 
)

Batch Normalization followed by ReLU for a channel-first tensor in memory, using packed bn_params.

◆ noodle_bn_relu() [2/2]

uint16_t noodle_bn_relu ( float *  x,
uint16_t  C,
uint16_t  W,
const float *  gamma,
const float *  beta,
const float *  mean,
const float *  var,
float  eps = 1e-3 
)

Batch Normalization followed by ReLU for a channel-first tensor in memory.

Parameters
x: Pointer to the input tensor in [C][W][W] layout.
C: Number of channels.
W: Width/height of each channel plane.
gamma: Pointer to the per-channel scale parameters.
beta: Pointer to the per-channel shift parameters.
mean: Pointer to the per-channel mean parameters.
var: Pointer to the per-channel variance parameters.
eps: Small constant to avoid division by zero.

◆ noodle_conv1d() [1/6]

uint16_t noodle_conv1d ( const char *  in_fn,
uint16_t  n_inputs,
const char *  out_fn,
uint16_t  n_outputs,
uint16_t  W,
const Conv conv,
CBFPtr  progress_cb = NULL 
)

File CHW→File CHW 1D convolution with bias+activation and NO pooling stage.

Semantics as above but appends raw conv+bias(+ReLU) sequences for each output channel to out_fn.

Parameters
in_fn: Packed input filename (CHW).
n_inputs: Number of input channels I.
out_fn: Packed output filename (CHW).
n_outputs: Number of output channels O.
W: Input length.
conv: Convolution parameters (K, P, S, weight_fn, bias_fn, act).
progress_cb: Optional progress callback in [0,1].
Returns
V (pre-pooling output length).

◆ noodle_conv1d() [2/6]

uint16_t noodle_conv1d ( const char *  in_fn,
uint16_t  n_inputs,
const char *  out_fn,
uint16_t  n_outputs,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

File CHW→File CHW 1D convolution with optional bias+activation and a pooling stage.

This follows the same I/O convention as noodle_conv_float():

  • in_fn is a single packed input file containing all input channels in CHW order (for 1D: C then W samples, one channel after another).
  • out_fn is a single packed output file; for each output channel O we append either the pooled sequence or the raw sequence (depending on overload).
  • Weights are read sequentially from conv.weight_fn in the order: for O in [0..n_outputs) and I in [0..n_inputs), read K floats (kernel taps).
  • Biases are read sequentially from conv.bias_fn (one float per output channel).
Parameters
in_fn: Packed input filename (CHW).
n_inputs: Number of input channels I.
out_fn: Packed output filename (CHW).
n_outputs: Number of output channels O.
W: Input length.
conv: Convolution parameters (K, P, S, weight_fn, bias_fn, act).
pool: Pool parameters (kernel M, stride T).
progress_cb: Optional progress callback in [0,1].
Returns
V_out after pooling.

◆ noodle_conv1d() [3/6]

uint16_t noodle_conv1d ( const char *  in_fn,
uint16_t  n_inputs,
const char *  out_fn,
uint16_t  n_outputs,
uint16_t  W,
const ConvMem conv,
CBFPtr  progress_cb 
)

◆ noodle_conv1d() [4/6]

uint16_t noodle_conv1d ( const char *  in_fn,
uint16_t  n_inputs,
float *  out,
uint16_t  n_outputs,
uint16_t  W,
const ConvMem conv,
CBFPtr  progress_cb 
)

◆ noodle_conv1d() [5/6]

uint16_t noodle_conv1d ( float *  in,
uint16_t  n_inputs,
const char *  out_fn,
uint16_t  n_outputs,
uint16_t  W,
const ConvMem conv,
CBFPtr  progress_cb 
)

◆ noodle_conv1d() [6/6]

uint16_t noodle_conv1d ( float *  in,
uint16_t  n_inputs,
float *  out,
uint16_t  n_outputs,
uint16_t  W,
const ConvMem conv,
CBFPtr  progress_cb = NULL 
)

Memory→Memory 1D convolution with optional bias+activation and NO pooling stage. This operation does NOT need temp buffers!

Parameters
in: Input array (CHW).
n_inputs: Number of input channels I.
out: Output array (CHW).
n_outputs: Number of output channels O.
W: Input length.
conv: Convolution parameters (K, P, S, weight_fn, bias_fn, act).
progress_cb: Optional progress callback in [0,1].
Returns
V (pre-pooling output length).

◆ noodle_conv_byte()

uint16_t noodle_conv_byte ( const char *  in_fn,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

File→File 2D conv with BYTE input feature maps.

◆ noodle_conv_float() [1/6]

uint16_t noodle_conv_float ( const char *  in_fn,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

File→File 2D conv with FLOAT input feature maps.

◆ noodle_conv_float() [2/6]

uint16_t noodle_conv_float ( const char *  in_fn,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

File→Memory 2D conv with FLOAT inputs; writes [O, Wo, Wo] tensor to output.

◆ noodle_conv_float() [3/6]

uint16_t noodle_conv_float ( float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

Memory→File 2D conv with FLOAT inputs and in-file conv parameters.

◆ noodle_conv_float() [4/6]

uint16_t noodle_conv_float ( float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
uint16_t  W,
const ConvMem conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

Memory→File 2D conv with FLOAT inputs and in-variable conv parameters.

◆ noodle_conv_float() [5/6]

uint16_t noodle_conv_float ( float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

Memory→Memory 2D conv with FLOAT inputs and in-file conv parameters.

◆ noodle_conv_float() [6/6]

uint16_t noodle_conv_float ( float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
uint16_t  W,
const ConvMem conv,
const Pool pool,
CBFPtr  progress_cb = NULL 
)

Memory→Memory 2D conv with FLOAT inputs and in-variable conv parameters.

◆ noodle_create_buffer()

float * noodle_create_buffer ( uint16_t  size)

Allocate a raw float buffer of size bytes.

◆ noodle_delete_buffer()

void noodle_delete_buffer ( float *  buffer)

Free a buffer allocated by noodle_create_buffer.

◆ noodle_delete_file()

void noodle_delete_file ( const char *  fn)

Delete a file if it exists.

◆ noodle_dwconv_float() [1/3]

uint16_t noodle_dwconv_float ( const char *  in_fn,
uint16_t  n_channels,
const char *  out_fn,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb 
)

Depthwise convolution (float input/output; params from files).

For each input channel I, reads the I-th input feature map from in_fn (tokenized by I), convolves it with the depthwise kernel read from conv.weight_fn (also tokenized by I), adds bias from conv.bias_fn (one bias per input channel), applies activation, and writes the output feature map to out_fn (tokenized by I). Requires temp buffers set via noodle_setup_temp_buffers.

Parameters
in_fn: Base input filename template (receives I).
n_channels: Number of input/output channels.
out_fn: Base output filename template (receives I).
W: Input width/height.
conv: Convolution parameters (K, P, S, weight_fn, bias_fn, act).
pool: Pool parameters (kernel M, stride T).
progress_cb: Optional progress callback in [0,1].
Returns
Output width after pooling.

◆ noodle_dwconv_float() [2/3]

uint16_t noodle_dwconv_float ( float *  input,
uint16_t  n_channels,
float *  output,
uint16_t  W,
const Conv conv,
const Pool pool,
CBFPtr  progress_cb 
)

Memory → memory depthwise conv (float input).

Assumes:

  • input layout: [C][W][W] flattened
  • output layout: [C][Wo][Wo] flattened (Wo depends on pooling)

Parameters
input: Pointer to the input tensor in [C][W][W] layout.
n_channels: Number of input/output channels.
output: Pointer to the output tensor in [C][Wo][Wo] layout.
W: Input width/height.
conv: Convolution parameters (K, P, S, weight_fn, bias_fn, act).
pool: Pool parameters (kernel M, stride T).
progress_cb: Optional progress callback in [0,1].
Returns
Output width after pooling.

◆ noodle_dwconv_float() [3/3]

uint16_t noodle_dwconv_float ( float *  input,
uint16_t  n_channels,
float *  output,
uint16_t  W,
const ConvMem conv,
const Pool pool,
CBFPtr  progress_cb 
)

Memory → memory depthwise conv (float input) with in-variable weights/bias.

Assumes:

  • input layout: [C][W][W] flattened
  • output layout: [C][Wo][Wo] flattened (Wo depends on pooling)

Parameters
input: Pointer to the input tensor in [C][W][W] layout.
n_channels: Number of input/output channels.
output: Pointer to the output tensor in [C][Wo][Wo] layout.
W: Input width/height.
conv: Convolution parameters with in-variable weights/bias.
pool: Pool parameters (kernel M, stride T).
progress_cb: Optional progress callback in [0,1].
Returns
Output width after pooling.

◆ noodle_fcn() [1/8]

uint16_t noodle_fcn ( const byte input,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
const FCNFile fcn,
CBFPtr  progress_cb = NULL 
)

Memory→Memory fully-connected layer (byte inputs; params from files).

Parameters
input: Byte array of length n_inputs (values 0..255 interpreted as float).
n_inputs: Number of inputs.
n_outputs: Number of outputs.
output: Float array of length n_outputs (written).
fcn: Filenames for weights/bias and activation mode.
progress_cb: Optional progress callback.
Returns
n_outputs.

◆ noodle_fcn() [2/8]

uint16_t noodle_fcn ( const char *  in_fn,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
const FCNFile fcn,
CBFPtr  progress_cb = NULL 
)

File→File fully-connected layer (float text inputs; params from files).

For each output neuron O, rewinds in_fn, accumulates dot(W[O], x) + b[O], applies activation, and appends to out_fn.

Parameters
in_fn: Input filename containing n_inputs floats (one per line).
n_inputs: Number of inputs.
n_outputs: Number of outputs.
out_fn: Output filename (appends/overwrites as created).
fcn: Filenames for weights/bias.
progress_cb: Optional progress callback in [0,1].
Returns
n_outputs.

◆ noodle_fcn() [3/8]

uint16_t noodle_fcn ( const char *  in_fn,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
const FCNFile fcn,
CBFPtr  progress_cb = NULL 
)

File→Memory fully-connected layer (float output; params from files).

Reads inputs from in_fn for each output neuron O, computing y[O] = dot(W[O], x) + b[O], then applies activation.

Parameters
in_fn: Input filename with n_inputs floats per forward pass.
n_inputs: Number of inputs.
n_outputs: Number of outputs.
output: Float array of length n_outputs (written).
fcn: Filenames for weights/bias.
progress_cb: Optional progress callback.
Returns
n_outputs.

◆ noodle_fcn() [4/8]

uint16_t noodle_fcn ( const float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
const FCNFile fcn,
CBFPtr  progress_cb 
)

Memory→File fully-connected layer (float inputs; params from files).

Computes y = W·x + b, optionally applies activation, and writes n_outputs lines to out_fn.

Parameters
input: Pointer to n_inputs float values.
n_inputs: Number of inputs.
n_outputs: Number of outputs.
out_fn: Output filename (one float per line).
fcn: Filenames for weights and bias; weights read row-major [O, I].
progress_cb: Optional progress callback in [0,1].
Returns
n_outputs.

◆ noodle_fcn() [5/8]

uint16_t noodle_fcn ( const float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
const FCNFile fcn,
CBFPtr  progress_cb 
)

Memory→Memory fully-connected layer (float output; params from files).

Reads inputs from input for each output neuron O, computing y[O] = dot(W[O], x) + b[O], then applies activation.

Parameters
input: Float array of length n_inputs.
n_inputs: Number of inputs.
n_outputs: Number of outputs.
output: Float array of length n_outputs (written).
fcn: Filenames for weights/bias.
progress_cb: Optional progress callback.
Returns
n_outputs.

◆ noodle_fcn() [6/8]

uint16_t noodle_fcn ( const float *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
const FCNMem fcn,
CBFPtr  progress_cb = NULL 
)

Memory→Memory fully-connected layer (float inputs; explicit in-variable weights/bias).

Weights are row-major [n_outputs, n_inputs] and biases length n_outputs.

Parameters
input: Float array of length n_inputs.
n_inputs: Number of inputs.
n_outputs: Number of outputs.
output: Float array of length n_outputs (written).
fcn: In-variable weights/bias and activation.
progress_cb: Optional progress callback in [0,1].
Returns
n_outputs.

◆ noodle_fcn() [7/8]

uint16_t noodle_fcn ( const int8_t *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
const char *  out_fn,
const FCNFile fcn,
CBFPtr  progress_cb = NULL 
)

Memory→File fully-connected layer (int8 inputs; weights/bias from files).

Computes y = W·x + b, optionally applies ReLU, and writes n_outputs lines to out_fn.

Parameters
input: Pointer to n_inputs int8 values.
n_inputs: Number of inputs.
n_outputs: Number of outputs.
out_fn: Output filename (one float per line).
fcn: Filenames for weights and bias; weights read row-major [O, I].
progress_cb: Optional progress callback in [0,1].
Returns
n_outputs.

◆ noodle_fcn() [8/8]

uint16_t noodle_fcn ( const int8_t *  input,
uint16_t  n_inputs,
uint16_t  n_outputs,
float *  output,
const FCNFile fcn,
CBFPtr  progress_cb = NULL 
)

Memory→Memory fully-connected layer (int8 inputs; params from files).

Parameters
input: Int8 array of length n_inputs.
n_inputs: Number of inputs.
n_outputs: Number of outputs.
output: Float array of length n_outputs (written).
fcn: Filenames for weights/bias and activation mode.
progress_cb: Optional progress callback.
Returns
n_outputs.

◆ noodle_find_max()

void noodle_find_max ( float *  input,
uint16_t  n,
float &  max_val,
uint16_t &  max_idx 
)

Find the maximum value and its index in a float array.

Parameters
input: Pointer to the input float array.
n: Length of the input array.
max_val: Reference to store the maximum value found.
max_idx: Reference to store the index of the maximum value.

◆ noodle_flat() [1/2]

uint16_t noodle_flat ( const char *  in_fn,
float *  output,
uint16_t  V,
uint16_t  n_filters 
)

File→Memory flatten: reads n_filters feature maps from files named by in_fn (tokenized by O via ::noodle_n2ll at positions 4/6 as appropriate) and writes a vector of length V×V×n_filters in row-major [i*n_filters + k].

Parameters
in_fn: Base filename of pooled feature maps (receives O).
output: Output buffer of length V×V×n_filters.
V: Spatial size (width = height).
n_filters: Number of channels (O).
Returns
V×V×n_filters.

◆ noodle_flat() [2/2]

uint16_t noodle_flat ( float *  input,
float *  output,
uint16_t  V,
uint16_t  n_filters 
)

Memory→Memory flatten: flattens [O, V, V] into a vector of length V×V×n_filters.

Parameters
input: Base pointer to stacked feature maps [O, V, V].
output: Output buffer of length V×V×n_filters.
V: Spatial size.
n_filters: Number of channels O.
Returns
V×V×n_filters.

◆ noodle_fs_init() [1/4]

bool noodle_fs_init ( )

Initialize SD/FS backend with default pins/settings.

◆ noodle_fs_init() [2/4]

bool noodle_fs_init ( uint8_t  clk_pin,
uint8_t  cmd_pin,
uint8_t  d0_pin 
)

Initialize SD/FS backend (pins variant is meaningful only for SD_MMC).

◆ noodle_fs_init() [3/4]

bool noodle_fs_init ( uint8_t  clk_pin,
uint8_t  cmd_pin,
uint8_t  d0_pin,
uint8_t  d1_pin,
uint8_t  d2_pin,
uint8_t  d3_pin 
)

Initialize SD/FS backend with explicit pins (meaningful only for SD_MMC).

◆ noodle_fs_init() [4/4]

bool noodle_fs_init ( uint8_t  cs_pin)

Initialize SD/FS backend with a specific CS_PIN.

◆ noodle_gap()

uint16_t noodle_gap ( float *  inout,
uint16_t  C,
uint16_t  W 
)

Global Average Pooling for a channel-first tensor in memory.

Parameters
inout: Pointer to the input tensor in [C][W][W] layout.
C: Number of channels.
W: Width/height of each channel plane.
Returns
Number of channels C.

◆ noodle_grid_from_file() [1/6]

void noodle_grid_from_file ( const char *  fn,
byte buffer,
uint16_t  K 
)

Read a K × K grid (stored as byte) from fn into buffer.

◆ noodle_grid_from_file() [2/6]

void noodle_grid_from_file ( const char *  fn,
float *  buffer,
uint16_t  K 
)

Read a K × K grid (stored as float) from fn into buffer.

◆ noodle_grid_from_file() [3/6]

void noodle_grid_from_file ( const char *  fn,
int8_t *  buffer,
uint16_t  K 
)

Read a K × K grid (stored as int8) from fn into buffer.

◆ noodle_grid_from_file() [4/6]

void noodle_grid_from_file ( NDL_File &  fi,
byte buffer,
uint16_t  K 
)

Read a K × K grid (stored as byte) from fi (opened file handler) into buffer.

◆ noodle_grid_from_file() [5/6]

void noodle_grid_from_file ( NDL_File &  fi,
float *  buffer,
uint16_t  K 
)

Read a K × K grid (stored as float) from an opened file handler fi into buffer.

◆ noodle_grid_from_file() [6/6]

void noodle_grid_from_file ( NDL_File &  fi,
int8_t *  buffer,
uint16_t  K 
)

Read a K × K grid (stored as int8) from fi (opened file handler) into buffer.

◆ noodle_grid_to_file() [1/4]

void noodle_grid_to_file ( byte grid,
const char *  fn,
uint16_t  n 
)

Write an n byte grid to fn as bytes, row-major. File will be opened and closed.

◆ noodle_grid_to_file() [2/4]

void noodle_grid_to_file ( byte grid,
NDL_File &  fo,
uint16_t  n 
)

Write an n byte grid to fo (opened file handler) as bytes, row-major. No file open and close operations.

◆ noodle_grid_to_file() [3/4]

void noodle_grid_to_file ( float *  grid,
const char *  fn,
uint16_t  n 
)

Write an n float grid to fn, row-major.

◆ noodle_grid_to_file() [4/4]

void noodle_grid_to_file ( float *  grid,
NDL_File &  fo,
uint16_t  n 
)

Write an n float grid to fo (an opened file handler), row-major.

◆ noodle_read_bytes_until()

size_t noodle_read_bytes_until ( NDL_File &  file,
char  terminator,
char *  buffer,
size_t  length 
)

Read bytes from a file until a terminator or length-1 (NULL terminated).

Parameters
file: Open file handle.
terminator: Stop when this character is read (not stored).
buffer: Destination buffer (always NULL terminated).
length: Maximum bytes to write into buffer, including the NULL.
Returns
Number of characters written (excluding NULL).

◆ noodle_read_top_line()

void noodle_read_top_line ( const char *  fn,
char *  line,
size_t  maxlen 
)

Read the first line of a given text file.

Parameters
fn: File name to read.
line: Destination buffer receiving the first line.
maxlen: Maximum number of characters to read.

◆ noodle_relu()

uint16_t noodle_relu ( float *  input_output,
uint16_t  n 
)

In-place ReLU over a length-n vector. Returns n.

◆ noodle_reset_buffer()

void noodle_reset_buffer ( float *  buffer,
uint16_t  n 
)

Fill buffer with zeros (n floats).

◆ noodle_setup_temp_buffers() [1/2]

void noodle_setup_temp_buffers ( void *  b1,
void *  b2 
)

Provide two reusable temporary buffers used internally by file-streaming operations. Must be called before conv/FCN variants that read from files. Two temp buffers are needed for operations that read from a file. For a C×W×W tensor, each buffer should hold W×W floats.

Parameters
b1: Buffer #1 (input scratch). See size guidance above.
b2: Buffer #2 (float accumulator). See size guidance above.

◆ noodle_setup_temp_buffers() [2/2]

void noodle_setup_temp_buffers ( void *  b2)

Provide a single reusable temporary buffer used internally by file-streaming ops. Must be called before conv/FCN variants that read from files. Only one temp buffer is needed for operations that read from a variable; hence only the output accumulator buffer is required. For a C×W×W tensor, the buffer should hold W×W floats.

Parameters
b2: Buffer #2 (float accumulator). See size guidance above.

◆ noodle_sigmoid()

uint16_t noodle_sigmoid ( float *  input_output,
uint16_t  n 
)

In-place sigmoid over a length-n vector. Returns n.

◆ noodle_slice()

float * noodle_slice ( float *  flat,
size_t  W,
size_t  z 
)
inline

Slice a stacked [Z, W, W] tensor laid out as contiguous planes.

Parameters
flat: Pointer to base of the contiguous array.
W: Width/height of each 2D plane.
z: Plane index to slice.
Returns
Pointer to the start of plane z (no bounds checks).

◆ noodle_soft_max()

uint16_t noodle_soft_max ( float *  input_output,
uint16_t  n 
)

In-place softmax over a length-n vector. Returns n.

◆ noodle_unpack_bn_params()

void noodle_unpack_bn_params ( const float *  bn_params,
uint16_t  C,
const float **  gamma,
const float **  beta,
const float **  mean,
const float **  var 
)

Unpack batch normalization parameters from a flat array.

Parameters
bn_params: Pointer to the packed batch normalization parameters.
C: Number of channels.
gamma: Output pointer to the per-channel scale parameters.
beta: Output pointer to the per-channel shift parameters.
mean: Output pointer to the per-channel mean parameters.
var: Output pointer to the per-channel variance parameters.