Liquid Time-constant Neural Networks
- article link
- formal definition
- universal approximator
- inspired by the nervous system dynamics of small species
- synaptic transmission from neuron j to neuron i depends on the state of the postsynaptic neuron i
- this makes the effective time constant of neuron i state-dependent (see the derivation after this block)
- stable and bounded behavior
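A short derivation of that state-dependent time constant, obtained by rearranging the ODE given in the next block (same notation, nothing added):

  dx(t)/dt = −x(t)/τ_sys + f(x(t), I(t), t, θ) A,  with  τ_sys = τ / (1 + τ f(x(t), I(t), t, θ))

Because f is a bounded, non-negative (sigmoidal) network output, τ_sys stays within a fixed interval between τ/(1 + τ·sup f) and τ, which is what grounds the stability and boundedness claims above.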
- article link
- practical implementation, expressiveness
- dx(t)/dt = −(1/τ + f(x(t), I(t), t, θ)) x(t) + f(x(t), I(t), t, θ) A
- the neural network f not only determines the derivative of the hidden state x(t) but also serves as an input-dependent, varying time constant
- this enables individual elements of the hidden state to identify specialized dynamical systems for incoming input features
- novel fused ODE solver (a minimal sketch follows this block)
- Runge-Kutta-based solvers would require an exponential number of discretization steps, because the LTC equations are stiff ODEs
- trained with backpropagation through time (BPTT)
- superior expressivity, improved modeling performance
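The fused solver combines an explicit and an implicit Euler update so that f is evaluated only once per step. A minimal NumPy sketch, using the update formula from the paper; the single-layer sigmoid network standing in for f, the layer sizes, and the toy input signal are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

def fused_ltc_step(x, I, dt, tau, A, W, b):
    """One fused implicit-explicit Euler step:
    x(t+dt) = (x(t) + dt * f * A) / (1 + dt * (1/tau + f)),
    where f = f(x(t), I(t), theta) is evaluated once and reused in
    both the numerator and the denominator."""
    z = np.concatenate([x, I])
    f = 1.0 / (1.0 + np.exp(-(W @ z + b)))  # placeholder sigmoid layer; f >= 0
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy rollout: 4 hidden units driven by a 2-dimensional input signal.
rng = np.random.default_rng(0)
n, m, dt = 4, 2, 0.1
W, b = rng.normal(size=(n, n + m)), np.zeros(n)
tau, A = np.ones(n), rng.normal(size=n)
x = np.zeros(n)
for t in range(100):
    I = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = fused_ltc_step(x, I, dt, tau, A, W, b)
print(x)
```

Dividing by 1 + Δt(1/τ + f) is what keeps the update stable for stiff dynamics, where a purely explicit step would need a very small Δt.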
- article link
- neural circuit policies (NCP)
- sensory, inter-neuron, command, motor
- feedforward connections between consecutive layers + recurrent synapses within the command layer
- synapses drawn by random sampling (a minimal wiring sketch follows this block)
- sparse, parameter-efficient
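A minimal sketch of how random sampling yields a sparse four-layer wiring; the layer sizes and fanout counts here are illustrative (a real NCP additionally assigns synapse polarity and specifies motor connections by fan-in), so this only shows how sparse the resulting connectivity mask is:

```python
import numpy as np

def sample_synapses(n_src, n_dst, fanout, rng):
    """Sparse connectivity mask: each source neuron connects to
    `fanout` randomly sampled destination neurons."""
    mask = np.zeros((n_src, n_dst), dtype=bool)
    for i in range(n_src):
        mask[i, rng.choice(n_dst, size=fanout, replace=False)] = True
    return mask

# Illustrative layer sizes and fanouts (not the paper's exact values).
rng = np.random.default_rng(0)
n_sensory, n_inter, n_command, n_motor = 8, 12, 6, 2
sensory_to_inter  = sample_synapses(n_sensory, n_inter, fanout=4, rng=rng)
inter_to_command  = sample_synapses(n_inter, n_command, fanout=3, rng=rng)
command_recurrent = sample_synapses(n_command, n_command, fanout=2, rng=rng)
command_to_motor  = sample_synapses(n_command, n_motor, fanout=1, rng=rng)

masks = [sensory_to_inter, inter_to_command, command_recurrent, command_to_motor]
total = sum(int(m.sum()) for m in masks)
dense = sum(m.size for m in masks)
print(f"{total} synapses vs. {dense} in a fully connected wiring")
```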
- article link
- causality
- differential equations can express causal structure
- Neural ODEs are not causal models
- LTCs are dynamic causal models
- training LTCs via gradient descent yields causal models
- article link
- Liquid Structural State-Space Models
- Liquid networks are nonlinear SSMs with an input-dependent state-transition module; because they are dynamic causal models, they can learn to adapt the model's dynamics to incoming inputs at inference time (illustrative recurrence sketched after this block)
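A sequential sketch of that input-dependent transition, assuming a discretized liquid recurrence of the form x_{k+1} = Ā x_k + (B̄ u_k) ⊙ x_k + B̄ u_k; Liquid-S4 itself evaluates the model through a convolutional kernel with input-correlation terms, so this loop illustrates the mechanism rather than the paper's algorithm:

```python
import numpy as np

def liquid_ssm_scan(A_bar, B_bar, C, u_seq):
    """Step the recurrence over a scalar input sequence. The
    B_bar * u_k * x term makes the state transition depend on the
    input, unlike a standard (fixed-transition) linear SSM."""
    x = np.zeros(A_bar.shape[0])
    ys = []
    for u_k in u_seq:
        x = A_bar @ x + B_bar * u_k * x + B_bar * u_k
        ys.append(C @ x)
    return np.array(ys)

# Toy run with random parameters and a sinusoidal input.
rng = np.random.default_rng(0)
N = 8
A_bar = 0.9 * np.eye(N) + 0.01 * rng.normal(size=(N, N))
B_bar, C = 0.1 * rng.normal(size=N), rng.normal(size=N)
print(liquid_ssm_scan(A_bar, B_bar, C, np.sin(np.linspace(0, 3, 50)))[-5:])
```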
- article link
- Closed-form Continuous-time Neural Networks
- article link
- flight navigation: