Abstract
This work connects models of virus spread over networks with their equivalent neural network representations. Based on this connection, we propose a new neural network architecture, Transmission Neural Networks (TransNNs), in which activation functions are associated primarily with links and are allowed to have different activation levels. This connection also leads to the discovery and derivation of three new activation functions with tunable or trainable parameters. We prove that TransNNs with a single hidden layer and a fixed non-zero bias term are universal function approximators. Finally, we present new derivations of continuous-time epidemic network models based on TransNNs.