Enter Temporal Event-based Neural Nets (TENNs). BrainChip, the first company to commercialize neuromorphic, or event-based, processing IP, has extended this approach to efficiently combine spatial and temporal convolutions for processing sequential data. A significant benefit is that TENNs avoid the training complexity of typical sequence models and train just like simpler CNNs, with the added benefit of reducing model size without losing accuracy.
Overall, TENNs are a powerful tool for processing time-series data: they can learn to represent the temporal structure of the data and make predictions for future time steps. But TENNs go further, treating streaming inputs such as video as a time series of frames and performing a 3D convolution that comprises a temporal convolution along the time axis and a spatial convolution over the XY plane.
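To make the idea concrete, here is a minimal NumPy sketch of that factorization: a 1D temporal convolution applied along the time axis of a clip, followed by a 2D spatial convolution applied to each frame. This is an illustrative decomposition of a separable 3D convolution, not BrainChip's actual implementation; the function names and kernel choices are hypothetical.

```python
import numpy as np

def temporal_conv(x, kt):
    """Convolve a (T, H, W) clip with a 1D kernel kt along the time axis."""
    T, H, W = x.shape
    Kt = len(kt)
    out = np.zeros((T - Kt + 1, H, W))
    for t in range(T - Kt + 1):
        # Weighted sum of Kt consecutive frames
        out[t] = np.tensordot(kt, x[t:t + Kt], axes=1)
    return out

def spatial_conv(x, ks):
    """Convolve each frame of a (T, H, W) clip with a 2D kernel ks."""
    T, H, W = x.shape
    Kh, Kw = ks.shape
    out = np.zeros((T, H - Kh + 1, W - Kw + 1))
    for t in range(T):
        for i in range(H - Kh + 1):
            for j in range(W - Kw + 1):
                out[t, i, j] = np.sum(x[t, i:i + Kh, j:j + Kw] * ks)
    return out

# Hypothetical example: 8 frames of 16x16 video
clip = np.random.rand(8, 16, 16)
kt = np.array([0.25, 0.5, 0.25])   # 3-tap temporal smoothing kernel
ks = np.ones((3, 3)) / 9.0          # 3x3 spatial averaging kernel

y = spatial_conv(temporal_conv(clip, kt), ks)
print(y.shape)  # (6, 14, 14): time, height, and width each shrink by kernel size - 1
```

Because the temporal and spatial passes are applied separately, a separable kernel of this kind needs Kt + Kh*Kw weights instead of the Kt*Kh*Kw a full 3D kernel would require, which is one intuition for why such factorized models can be smaller.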
BrainChip’s Akida brings this innovative ability to treat vision, video, and other three-dimensional data as time series. Tracking an object across multiple two-dimensional frames, with time as the third dimension, makes video object detection much more effective.
The second-generation Akida processor IP is available now from BrainChip for inclusion in any SoC and comes complete with a software stack tuned for this unique architecture. We encourage companies to investigate this technology, especially those building time-series or sequential-data applications.