Know much about "Population coding in the cerebellum"?

The cerebellum resembles a feedforward, three-layer network of neurons in which the "hidden layer" consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. However, unlike an artificial network, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? To consider these questions, I apply elementary mathematics from machine learning and assume that the output of each DCN neuron is a prediction that is compared to the actual observation, resulting in an error signal that originates in the inferior olive. This signal is sent to P-cells via climbing fibers that produce complex spikes. The same error signal from the olive must also guide learning in the DCN neurons, yet the olivary projections to the DCN are weak, particularly in adulthood. However, P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppresses the activity of the DCN neuron that produced the erroneous output. Viewed in the framework of machine learning, it appears that the olive organizes the P-cells into populations so that, through complex spike synchrony, each population can act as a surrogate teacher for the DCN neuron it projects to. This error-dependent grouping of P-cells into populations allows one to understand how P-cell simple spikes contribute to the control of movements.