DLCC
Deep Learning and Continuous-Time Computing
Artificial Intelligence (AI) is one of the objectives of Machine Learning: creating computers capable of intelligent behavior. Deep Learning represents a remarkable step in this direction. In deep learning, multiple sequential layers of processing build a hierarchical representation of information. Current state-of-the-art AI in the form of deep learning is based on deep neural networks; these networks have recently demonstrated breakthrough performance in a variety of cognitive tasks, from image and speech recognition to natural language generation and playing Atari games. However, in real applications the agent's state or its environment may vary rapidly and unexpectedly: the whole network then has to be updated continuously, imposing an inefficient computational load. A programmer has to decide on the step-size as the representation of time, choosing between a fine-grained temporal resolution and a coarse one. The former corresponds to many state changes that are difficult to learn, while in the latter state changes are easier to learn but lack precise timing, implying the loss of important features. Reducing the step-size has two consequences: the learning rule has to be formulated as a continuous-time approximation, and the internal processing of the state has to be decoupled from the asynchronous events of the environment.
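To make the step-size tradeoff concrete, the short sketch below (a toy illustration in plain Python; the leaky-integrator dynamics and all parameter values are our own assumptions, not part of any system discussed in this call) integrates the same continuous-time state dx/dt = (u - x)/tau with forward Euler at several step-sizes: a fine step follows the continuous trajectory at the cost of many updates, while a coarse step is cheap but distorts, or even destabilizes, the dynamics.

```python
def simulate(dt, t_end=1.0, tau=0.05, u=1.0):
    """Forward-Euler integration of the leaky state dx/dt = (u - x) / tau."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (u - x) / tau
    return x

# The exact solution approaches u = 1.0. A fine step reproduces it but needs
# many updates; a coarse step needs few updates but is inaccurate, and once
# dt exceeds tau the explicit scheme becomes unstable.
for dt in (0.001, 0.01, 0.1):
    print(f"dt={dt:<6} steps={int(1.0 / dt):<5} x(t_end)={simulate(dt):.4f}")
```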
Spiking neural networks (SNNs) represent one solution for efficient continuous-time representation and computing. Spiking neurons refine the computational unit of standard neural networks, the neuron, by closely modeling the asynchronous mode of communication of their biological counterparts. In principle, spiking neural networks allow rapid and asynchronous neural computation while limiting the computational and energetic cost of network communication. Indeed, spiking neurons operate at high temporal precision, while the computationally most costly aspect of neural network computation, the network communication, can run at a much slower and asynchronous rate.
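As a minimal illustration of this style of computation (a sketch in plain Python with arbitrary parameter values, not a model prescribed by this call), the leaky integrate-and-fire neuron below updates its membrane state at a fine time step, but only the sparse spike events would be communicated to other neurons, which is what keeps network traffic low.

```python
def lif_run(input_current, dt=0.001, tau_m=0.02, v_rest=0.0,
            v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dense state updates, sparse spike output.

    `input_current` is a sequence of input values, one per time step.
    Returns the spike times; only these events would be sent to other neurons.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Continuous-time membrane dynamics, discretized with forward Euler.
        v += dt * (-(v - v_rest) + i_in) / tau_m
        if v >= v_thresh:           # threshold crossing -> emit a spike event
            spike_times.append(step * dt)
            v = v_reset             # reset the membrane potential
    return spike_times

# 200 ms of constant supra-threshold drive: a handful of spike events
# summarize activity that required hundreds of fine-grained state updates.
spikes = lif_run([1.5] * 200, dt=0.001)
print(len(spikes), spikes[:5])
```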
The computational power of deep learning can only be exploited in a continuous-time environment if adequate acceleration hardware is used. General-purpose graphics processing unit (GPU) computing represents one prominent example. For instance, the next generation of NVIDIA’s GPU architectures has been designed to accelerate deep learning applications such as image classification, video analysis, speech recognition, natural language processing and even self-driving cars. Moreover, it has been shown that GPUs accelerate spiking neural network simulations considerably, so that executing large-scale computational models of brain structures in real time is now possible. The SpiNNaker neuromorphic architecture is another important and well-known acceleration platform, offering a scalable and more energy-efficient solution for large-scale real-time neural simulation. This highly flexible, software-configurable neural modeling platform supports SNN description languages such as PyNN and NENGO, and has configuration software (developed and maintained within the EU Flagship Human Brain Project) that translates these descriptions into run-time executables. Over 40 SpiNNaker hardware systems are in use by different research groups around the world. In the field of neuromorphic computing, the results obtained with Silicon Neurons (SiNs) are also worth noting. A new generation of compact chips currently under development emulates the neural organization and function of thousands of neurons in electronic devices. Neuromorphic SiN networks are much more efficient than simulations executed on general-purpose computers, and the speed of the network is independent of the number of neurons or of their coupling. In conclusion, different hardware approaches aim at exploiting current improvements in asynchronous computing for real applications. These advanced technologies reflect the rising interest in bringing deep learning and asynchronous continuous-time computing into the next generation of intelligent machines.
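To give a flavour of how such platforms are typically programmed, here is a minimal PyNN-style sketch (population size, input rate, weight and delay are arbitrary illustrative values; the backend module assumed here is the NEST simulator, and a different backend module is used when targeting SpiNNaker hardware):

```python
import pyNN.nest as sim   # simulator backend; SpiNNaker uses its own PyNN backend

sim.setup(timestep=0.1)   # integration time step in ms

# A Poisson spike source driving a small population of integrate-and-fire cells.
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
cells = sim.Population(100, sim.IF_curr_exp(), label="excitatory")

sim.Projection(noise, cells, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

cells.record("spikes")
sim.run(1000.0)           # simulate one second of biological time
spikes = cells.get_data("spikes")
sim.end()
```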
This Track aims to consolidate the current state of the art in deep learning, continuous-time computing, spiking neural networks and related acceleration hardware tools, showing recent and future progress of this rapidly growing research field.
Major topics of interest include but are not limited to the following:
- Deep Learning
- Continuous-time learning
- Spiking Neural Networks
- Asynchronous computation
- Large-scale parallel simulations and computing
- Neuromorphic computing
You can download the DLCC CFP here