Back in 2014 I read this paper, from Judith Hirsch’s lab. To my simple mind, it was a pretty complex model, but it had a cute little video of the thalamus reducing position noise from the retina. I’m not going to lie, I still don’t fully understand the original paper, but, probably due to an urge to avoid doing experiments, I felt drawn to make a simple integrate-and-fire model of the retina -> thalamus circuit to see if it filtered noise. The results were somewhat predictable, and I tucked them away. But then an opportunity came to fish them out of the proverbial desk drawer, and I noticed something quite interesting: the way the individual cells filtered their input was the opposite of how the network filtered its input. It led me to publish this article. Below you can play with some of the simulations I used to produce the paper*.
Firstly, a warning: there will be some hand waving here. For once, this is not because I don’t know what I’m talking about, but rather an attempt to keep things general and simple. Leaky integrate-and-fire neurons are nice little computational objects, and at a first approximation they do a pretty good job of mimicking real neurons. More importantly for this discussion, they nearly always behave in some kind of “high-pass” way. That is, below some input rate you get no output, and above it you get more. Moreover, at least for some part of this input/output function, if you double the input rate, you get more than double the output rate. If you don’t believe me, try the little simulation below. Drag the slider (slowly!) to the right to increase the current to the “retinal” neuron. When it fires, it produces an EPSP in the “thalamic” neuron. If the retinal neuron fires fast enough, it will drive spiking in the thalamic neuron. A graph is then built up of the input firing rate against the ratio of the output firing rate to the input firing rate. If this system showed no filtering, you would expect this graph to be a horizontal line (i.e. doubling the input doubles the output). But as you can see, this is clearly not the case. One must admit that there is at least SOME form of high-pass filtering, as low-frequency input does nothing.
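If you’d rather poke at this in code than with a slider, below is a minimal sketch of the same setup: one leaky integrate-and-fire cell driven by Poisson EPSPs, swept across input rates. The parameters are invented for illustration (they are not the values from the paper), but the shape of the result is the same: essentially no output below some input rate, and an output/input ratio that grows with the input.

```python
import numpy as np

DT = 0.1e-3           # simulation time step (s)
TAU = 20e-3           # membrane time constant (s)
V_TH = 1.0            # spike threshold (arbitrary units; reset to 0)
EPSP = 0.25           # voltage jump per input (retinal) spike

def output_rate(input_hz, duration=5.0, rng=np.random.default_rng(0)):
    """Drive a LIF cell with Poisson EPSPs; return its firing rate (Hz)."""
    v, spikes = 0.0, 0
    for _ in range(int(duration / DT)):
        v -= (v / TAU) * DT                  # leak toward rest (0)
        if rng.random() < input_hz * DT:     # a Poisson EPSP arrives
            v += EPSP
        if v >= V_TH:                        # threshold crossed: spike and reset
            spikes += 1
            v = 0.0
    return spikes / duration

for hz in [20, 40, 80, 160, 320]:
    out = output_rate(hz)
    print(f"input {hz:3d} Hz -> output {out:5.1f} Hz (out/in ratio {out / hz:.2f})")
```

The low rates should produce essentially nothing, with the ratio climbing steeply once the mean depolarization approaches threshold, which is exactly the not-a-horizontal-line behavior of the graph above.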
However, let’s put these cells into a network. We can make the retinal cells “see” an object by driving current into them when the object is in front of them, and then hook their output to a layer of faux-thalamic neurons. Using two of these networks, one which sees the X position of the object and the other the Y position, we should be able to reconstruct the X-Y position of the object. In order to extract the information from the layer, I played around with a maximum likelihood estimator based on the firing pattern of the neurons at any one moment in time. However, I was unable to figure out the formula (and neither were any mathematicians I asked). I could calculate it numerically, though, and I found it produced almost identical values to simply using a center of mass algorithm (i.e. every firing neuron contributed its position to an average). Now, if we decode the position in each layer, what do we see? (Drag the green object around and see where the two layers think it is. You can change the noise in the system with the slider.)
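For the curious, the center of mass decoder is barely a function. Here is a sketch (the names are mine, not from the paper’s code):

```python
import numpy as np

def decode_position(spiked, preferred_positions):
    """Center-of-mass decode: average the preferred positions of the
    neurons that fired in this time bin (None if the layer was silent)."""
    spiked = np.asarray(spiked, dtype=bool)
    if not spiked.any():
        return None
    return np.asarray(preferred_positions)[spiked].mean()

# e.g. 10 neurons tiling positions 0..9, and neurons 3, 4 and 6 fired:
fired = np.zeros(10, dtype=bool)
fired[[3, 4, 6]] = True
print(decode_position(fired, np.arange(10)))   # -> 4.333...
```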
I think you can agree that something like a low-pass filter exists between the information in the input (retinal) layer and the second (thalamic) layer. That is, the thalamic layer is not affected by the brief random spikes in the retinal layer (especially when the noise is increased). There is also a phase shift, i.e. the response in the thalamus happens later than it does in the retina. Thus, the network is behaving differently (low-pass) to the individual cells (high-pass). This may seem like a straw-man argument, as I am conflating the frequency of the object’s movement with the frequency of EPSP input. This is a legitimate complaint; however, I assure you, similar conflations are prevalent in the literature. Moreover, it is an illustrative example, showing how careful we need to be when extrapolating filtering at the cellular level to the network level. For the code-inclined, here is a compressed, non-interactive version of the two-layer simulation; and if you’re still not convinced, I’ll paint a more explicit example right after it.
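Everything in this sketch is illustrative (a single one-dimensional 20-cell layer per stage, made-up currents and time constants, an object that jumps halfway through the run), a cartoon of the idea rather than the paper’s model. Isolated noise spikes in the retina die away on the thalamic membrane before reaching threshold, so the thalamic decode ignores them, while sustained retinal firing at the object’s position pushes the paired thalamic cells over threshold some tens of milliseconds late:

```python
import numpy as np

N, DT, TAU, V_TH = 20, 0.1e-3, 20e-3, 1.0     # cells per layer, step, tau, threshold
DRIVE, EPSP, NOISE_HZ = 120.0, 0.5, 20.0      # object current, EPSP jump, noise rate
positions = np.arange(N, dtype=float)
rng = np.random.default_rng(1)

def lif_step(v, current):
    """One Euler step for a whole layer of LIF cells; returns (v, spike mask)."""
    v = v + (-v / TAU + current) * DT
    spiked = v >= V_TH
    v[spiked] = 0.0
    return v, spiked

def com(counts):
    """Center-of-mass decode over one bin of spike counts (None if silent)."""
    total = counts.sum()
    return (positions * counts).sum() / total if total else None

def fmt(x):
    return "silent" if x is None else f"{x:4.1f}"

v_ret, v_thal = np.zeros(N), np.zeros(N)
ret_bin, thal_bin = np.zeros(N), np.zeros(N)
obj, bin_steps = 5.0, int(0.02 / DT)              # decode in 20 ms bins
for step in range(int(0.4 / DT)):
    if step == int(0.2 / DT):
        obj = 15.0                                # the object jumps halfway through
    drive = DRIVE * (np.abs(positions - obj) < 2)           # receptive-field current
    v_ret, ret_spk = lif_step(v_ret, drive)
    ret_spk = ret_spk | (rng.random(N) < NOISE_HZ * DT)     # retinal noise spikes
    v_thal, thal_spk = lif_step(v_thal, ret_spk * (EPSP / DT))  # EPSP-sized jumps
    ret_bin += ret_spk
    thal_bin += thal_spk
    if (step + 1) % bin_steps == 0:
        print(f"t={(step + 1) * DT:.2f}s  retina={fmt(com(ret_bin))}  "
              f"thalamus={fmt(com(thal_bin))}")
        ret_bin[:], thal_bin[:] = 0.0, 0.0
```

Run it and the retinal decode gets dragged around by the noise spikes every bin, while the thalamic decode sits near the true position and picks up the jump a bin or two later: low-pass behavior from a network of high-pass cells.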
The promised example: we have an object with a property that can be in two mutually exclusive states; think bright or dark, or oriented horizontally or vertically. We then have a neuron that codes for one of these states. Like a lot of sensory neurons, it doesn’t respond when the object stays in one state, but responds best when the object changes to its preferred state. There is a neighboring neuron that codes for the mutually exclusive state, and it too responds best when the property flips. If we flick the property faster and faster between the states, we will see the neurons fire faster and faster. One might then conclude that the network will best encode which state the object is in when the object is rapidly changing between states. However, in reality, the network will probably have no idea, as the two neurons will be active at the same time, and hence it will only be able to decode some mix of the two states.
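That thought experiment fits in a few lines of code too. In this toy version (all numbers invented), each change-detecting neuron emits a short burst whenever the stimulus flips into its preferred state, and a decoding window counts as decodable only if exactly one of the two neurons is active in it. Decodability climbs with the flicker rate right up until the two neurons’ bursts start to overlap, and then it collapses:

```python
import numpy as np

BURST = 0.03          # burst length after a preferred transition (s)
WINDOW = 0.02         # decoding window (s)

def decodable_fraction(flicker_hz, duration=10.0):
    """Fraction of decoding windows in which exactly one neuron is active."""
    switches = np.arange(0.0, duration, 1.0 / flicker_hz)
    a_bursts, b_bursts = switches[0::2], switches[1::2]   # A on, B on, A on, ...
    starts = np.arange(0.0, duration, WINDOW)

    def active(bursts):
        # a neuron is "active" in a window if any of its bursts overlaps it
        return np.array([((bursts <= s + WINDOW) & (bursts + BURST >= s)).any()
                         for s in starts])

    a, b = active(a_bursts), active(b_bursts)
    return np.mean(a ^ b)         # exactly one neuron active -> decodable

for hz in [2, 5, 10, 20, 40]:
    print(f"flicker {hz:2d} Hz -> decodable windows: {decodable_fraction(hz):.0%}")
```

The individual neurons are firing most at the highest flicker rate, yet that is precisely where the downstream decoder is left with nothing but a mix of the two states.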
Again, this may seem trivial or obvious, but this kind of logic is everywhere: investigate the behavior of individual cells, and then infer that the network probably behaves in a similar fashion. As I believe I have shown (with some hand waving, I admit), this is a very dangerous assumption.
To put it still a third way: neurons that rely on sustained high-frequency input to fire (that is, they behave as high-pass filters) can only be driven by a stimulus that is constant; thus the network itself will only respond to slowly changing stimuli (that is, it acts as a low-pass filter).
If you want to see a bit less hand waving, you should read the full article.
*Note: the models shown here are translated from my original Python code. While I tried my best, I make no guarantees that they produce identical results to those in the original paper.