Workshop Description
Full-day workshop on tensor network methods for logistics time-series modelling. Covers MPS, DMRG, and tensor train decomposition applied to multi-SKU demand sensing, IoT anomaly detection, and multivariate forecasting on classical hardware.
Logistics data is high-dimensional and sparse: thousands of SKUs, hundreds of correlated features (weather, promotions, traffic, supplier lead times), and irregular time series with missing values. Standard deep learning models require large training sets that logistics organisations rarely have at the individual-SKU level.

Tensor network methods, originally developed for quantum many-body physics (White 1992, DMRG), offer an alternative. Matrix product states (MPS) represent high-dimensional probability distributions with parameter counts that scale linearly, rather than exponentially, with the number of features. Tensor train decomposition compresses multivariate time series while preserving the correlation structure that matters for forecasting.

These methods run on standard GPUs and CPUs today; no quantum hardware is required. The quantum connection is structural: tensor networks describe the same mathematical objects as quantum circuits, and variational tensor network training on future quantum devices could accelerate model optimisation for problems where classical contraction is the bottleneck. This workshop teaches participants to build, train, and deploy tensor network models for logistics applications using the ITensor library, with rigorous comparisons against PCA, autoencoder, and LSTM baselines.
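To make the compression idea concrete, here is a minimal sketch of tensor train decomposition via sequential SVDs (the TT-SVD scheme), written in plain NumPy rather than the ITensor library the workshop uses. The function names `tt_svd` and `tt_reconstruct` are illustrative, not part of any library; the `max_rank` argument plays the role of the bond dimension discussed above.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into a tensor train by sweeping left to
    right with truncated SVDs. Bond dimension is capped at max_rank.
    Returns a list of 3-way cores with shapes (r_prev, dim_k, r_next)."""
    dims = tensor.shape
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, len(s))           # truncate the bond
        u, s, vt = u[:, :new_rank], s[:new_rank], vt[:new_rank, :]
        cores.append(u.reshape(rank, dims[k], new_rank))
        rank = new_rank
        # Carry the remaining weight to the right and expose the next index.
        mat = (np.diag(s) @ vt).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])            # drop the dummy bonds

# Demo: a separable (rank-1) 3-way tensor compresses exactly at bond dim 1,
# so the cores store 3 + 4 + 5 numbers instead of the full 3 * 4 * 5 = 60.
x = np.einsum('i,j,k->ijk', np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0))
cores = tt_svd(x, max_rank=1)
print([c.shape for c in cores])  # -> [(1, 3, 1), (1, 4, 1), (1, 5, 1)]
```

The parameter count of the train is the sum of the core sizes, so for fixed bond dimension it grows linearly with the number of tensor indices (features), while the full tensor grows exponentially.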
What participants cover
- Tensor network fundamentals: MPS, DMRG (White 1992), and tensor train decomposition for high-dimensional data compression
- Multi-SKU demand modelling: capturing cross-product correlations with sparse sales signals using bond dimension tuning
- IoT anomaly detection: tensor decomposition of fleet telematics and warehouse sensor time series for fault identification
- Multivariate forecasting: jointly modelling weather, traffic, inventory, and demand as entangled time-series variables
- Classical deployment: running tensor network models on GPUs/CPUs via ITensor, with no quantum hardware dependency
- Quantum acceleration path: how variational tensor network training on future quantum devices could improve optimisation for large-scale instances
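The anomaly-detection item above can be illustrated with the matrix special case of these decompositions: fit a low-rank model to normal sensor telemetry and flag timestamps whose reconstruction residual is large. This is a NumPy sketch on synthetic data, not the workshop's ITensor pipeline; the sensor counts, the rank of 2, and the residual threshold of 1.0 are all assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic warehouse telemetry: 20 sensors x 200 timestamps, driven by
# 2 shared latent factors (e.g. temperature cycle, compressor load) + noise.
t = np.linspace(0, 8 * np.pi, 200)
factors = np.vstack([np.sin(t), np.cos(t / 3)])   # (2, 200) latent signals
mixing = rng.normal(size=(20, 2))                 # per-sensor loadings
data = mixing @ factors + 0.05 * rng.normal(size=(20, 200))

# Inject a fault: sensor 7 spikes at timestamps 120-124.
data[7, 120:125] += 3.0

# Rank-2 truncated SVD gives the best rank-2 approximation (Eckart-Young);
# the two latent factors are captured, the localised spike is not.
u, s, vt = np.linalg.svd(data, full_matrices=False)
recon = (u[:, :2] * s[:2]) @ vt[:2, :]

# Residual energy per timestamp; normal noise sits well below 1.0 here.
residual = np.linalg.norm(data - recon, axis=0)
anomalies = np.where(residual > 1.0)[0]
print(anomalies)  # the injected fault window 120-124
```

The tensor-network version taught in the workshop generalises this idea: telemetry indexed by sensor, time, and asset becomes a higher-order tensor, and the truncated SVD becomes a tensor decomposition with a tuned bond dimension.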