Quantum Effects in Synaptic Neurons and Their Networks in the Brain
Paul Levi
Institute for Parallel and Distributed Systems (IPVS), Faculty for Informatics, Electrical Engineering and Information Technology, University Stuttgart, Stuttgart, Germany
To cite this article:
Paul Levi. Quantum Effects in Synaptic Neurons and Their Networks in the Brain. European Journal of Biophysics. Vol. 4, No. 6, 2016, pp. 47-66. doi: 10.11648/j.ejb.20160406.11
Received: January 2, 2017; Accepted: January 10, 2017; Published: February 10, 2017
Abstract: This article describes small neurotransmitters as particles of a spinless quantum field. That is, the particles are Bosons that can, e.g., occupy equal energy levels. In addition, we consider the particles of the presynaptic region before exocytosis occurs as elements of a grand canonical ensemble that is in a thermodynamic equilibrium. Thus, the particles obey the Bose-Einstein statistics, which also determines the corresponding information entropy and the corresponding density matrix. When the release of neurotransmitters occurs, the equilibrium collapses and the Bose-Einstein distribution transfers to the Poisson distribution. Moreover, the particles are transmitted as wave packets, with quantized energies and momenta, through the chemical synapses, where we also describe the effects of the quantum fluctuations. We mark this symmetry breaking process, which corresponds to a non-equilibrium phase transition, by a threshold that mainly depends on the mean particle number, with defined quanta. We model the connections of synaptic neurons of a population to a network by Hamiltonians that include both Bosons and Fermions and their interactions. Bosons are the carriers of messages (information) and Fermions are the switches, which forward these messages with a modified content. The effects we observe in such a neural circuitry reveal a strong dependence of the solutions on the initial values and, more relevant, solutions with chaotic behavior exist. These circuitry-based ramifications, together with possible internal malfunctioning of particular neurons of the network (e.g. intermittent flow), cause a sustained reduction of the synaptic plasticity.
Keywords: Quantum Field of Bosons, Thermodynamics, Symmetry Breaking, Quantum Fluctuations, Neural Quantum Circuitry
1. Introduction
Over the years, several authors have analyzed and described quantum effects in the brain as part of the comprehensive field of quantum biology [20]: for example, quantum computation in the brain and consciousness [3], [10], quantum dynamics in noisy environments [15], and molecular robotics [21].
Here, we focus on internal processes of synaptic neurons and on their connections to non-dendritic networks, combining the methods of quantum field theory and thermodynamics. This implies that we consider small neurotransmitters as particles of a spinless quantized field that obey the commutation rules of Bosons, and that the correlation function of two Bosons at different times is zero when their momenta are unequal. The last-mentioned effect is relevant when we analyze the impact of quantum fluctuations (coherent or incoherent solutions) of the wave packets released into the synaptic cleft.
A grand canonical ensemble characterizes the thermodynamic equilibrium of the particles before their release; therefore, the Bose-Einstein distribution function is relevant. The ensemble represents an open system with reversible transitions. This implies that the number of particles in the ensemble changes when we evaluate the variations of the free energy and of the entropy. Here, the entropy plays a dominant role, because it also delivers the density matrix, evaluated by a combination of quantum field theory and the Bose-Einstein statistics.
We describe the release process of particles with respect to a particular variance-based threshold defined by the relevant distribution function. Well beneath this threshold, the Bose-Einstein statistics is applicable, since the equilibrium state of the grand canonical ensemble is stable and no particles are sent out. The region of the threshold is characterized by the Bose-Einstein statistics with an intermediate variance. We approximate the corresponding distribution function by a Gaussian function whose variance is equal to the mean expectation value of quanta with a given value. Well above the threshold, the equilibrium collapses, because the released particles are transmitted with higher quantized momenta through the chemical synapses and the on-going processes in the cleft inhibit a thermodynamic equilibrium. However, the spilling out of particles nevertheless represents discrete events, which are now "governed" by the Poisson distribution. This distribution describes a steady flux of particles and stable particle densities, where both effects together are strong enough to cause persisting, non-intermittent effects at the receptors.
The transition from the equilibrium state to a non-equilibrium state together with the variation of the distribution functions represents a symmetry breaking that is called a non-equilibrium phase transition [13].
The interplay of "senders" (SNAREs and vesicle fusion at the active zone) and receivers represents a kind of self-organization between the pre- and postsynaptic regions of two connected neurons. We distinctly extend the aspect of self-organization when we describe a linear network of neurons, which could also be closed to a ring. We carry out this part of our contribution completely with the methods of quantum field theory, where we also include the interaction between Bosons (carriers of messages) and Fermions (switches of messages). We perform these investigations to present what kinds of effects of non-linear dynamics and chaotic effects are observable in continuous quantum circuitries. Such networks are not comparable to digital quantum computers (coherent states), since quantum fluctuations and noticeable noise effects occur.
The relevant temporal threshold of a neuron is the time to fill, exocytose and recycle synaptic vesicles, which is regularly done within one minute [23]. Thus, disruptions or retardations of the synaptic transmission in one neuron can cause, e.g., malfunctioning of receivers (neurological diseases). However, we also have to consider additional effects, e.g. chaotic behavior, that might occur in neural networks, since these effects reinforce the malfunctioning of individual neurons.
2. Particles (Materials)
2.1. Small Neurotransmitters Are Components of a Non-Relativistic Quantized Field
In this contribution, we investigate those parts of the transmission cycle of chemical synapses that are performed by nano-sized neurotransmitters like amino acids (approximately 1 nm) or amines (approximately 1-2 nm). Moreover, we presume that these small molecules are Bosons, which we treat as components of the corresponding non-relativistic quantized field [22]. In other words, each small molecule represents one field quantum, with quantized energy ε_k = ħω_k and momentum p = ħk, where k is a discrete wave vector and ω_k denotes the circular frequency. The wave vector k is discrete, since we use the box normalization, because small regions (e.g. cell, synaptic cleft) enclose the particles. The operator â†_k(t) creates and the operator â_k(t) annihilates one field quantum at time t, where for clarity we label operators by a "hat". Further, we point out that all our calculations are performed in the Heisenberg picture [26], [14]; therefore these two operators, as all operators, are time dependent. We chose this representation with respect to the frequent time-dependent changes of states, e.g. in synapses. The operator product n̂_k(t) = â†_k(t) â_k(t) defines the number operator, whose eigenvalue n_k counts the number of particles which are in the state k at time t. More formally, the eigenvalue equation of the number operator reads
n̂_k(t) |n_k⟩ = n_k |n_k⟩, with n_k = 0, 1, 2, … , (1)
where the normalized eigenvector reads
|n_k⟩ = (n_k!)^(−1/2) (â†_k)^(n_k) |0⟩. (2)
The many-particle state takes the form
|n_k1, n_k2, …⟩ = Π_i (n_ki!)^(−1/2) (â†_ki)^(n_ki) |0⟩, (3)
where the different eigenvalues n_ki of the number operators describe different particles, which all exist in parallel (simultaneous eigenfunctions of the number operators n̂_ki).
The field momentum and the field energy of an ensemble of non-interacting spinless particles at some time t are
P(t) = Σ_k ħk n_k(t) and E(t) = Σ_k ħω_k n_k(t), (4)
where n_k(t) counts the number of particles which are in state k.
Since each n_k specifies only the number of identical particles, with energy state ħω_k and momentum state ħk (the k-mode of the field that corresponds to a particular oscillator), the bosonic particles are indistinguishable.
For a spin 1 field (massive vector field) we have to amend the previous formulas (4) by the polarization parameter λ:
P(t) = Σ_{k,λ} ħk n_{k,λ}(t) and E(t) = Σ_{k,λ} ħω_k n_{k,λ}(t). (5)
For brevity, we evaluate in the following only the situation of spin 0 particles. Further, we forgo the description of additional details of quantum field theory and refer to the literature, e.g. [4], [25].
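As a numerical illustration of the mode sums in formula (4), the following minimal sketch evaluates the field energy and field momentum for a few occupied k-modes. The occupation numbers, wave numbers and frequencies are illustrative assumptions, not values from the text.

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Illustrative occupation of three k-modes (box normalization);
# these numbers are assumptions chosen for demonstration only.
k     = np.array([1.0e9, 2.0e9, 3.0e9])     # wave numbers, 1/m
omega = np.array([1.0e12, 2.0e12, 3.0e12])  # circular frequencies, 1/s
n     = np.array([5, 2, 1])                 # particles per mode

E_field = np.sum(n * hbar * omega)  # field energy,   E = sum_k n_k * hbar*omega_k
P_field = np.sum(n * hbar * k)      # field momentum, P = sum_k n_k * hbar*k (1D)
```

Because each mode contributes linearly, adding one quantum to mode k raises the energy by exactly ħω_k, which is the "one molecule = one field quantum" picture of the text.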
2.2. Small Neurotransmitters Are Components of a Grand Canonical Ensemble in a Thermodynamic Equilibrium State
We consider all Bose particles (neurotransmitters) as elements of a grand canonical ensemble before the exocytosis occurs. In more detail, there exist two systems Σ₁ and Σ₂, separated by a semi-permeable membrane (vesicle membrane), which are both in thermal as well as in diffusive contact. Therefore, the two systems can exchange heat and particles. The greater, outside system Σ₂ denotes the cytosol of a presynaptic neuron, which serves as a thermal reservoir (heat bath) and a particle reservoir. The external system Σ₂ encloses the smaller system Σ₁, which denotes the inside of a vesicle and represents an open, reversible system. Since Σ₁ is open, diffusion processes of neurotransmitters (e.g. GABA and Glutamate) can take place from Σ₂ into the vesicles and vice versa from Σ₁ into Σ₂.
The total number of particles of both systems is N = N₁ + N₂, where N₂ also includes all neurotransmitters which diffuse from the extracellular fluid, e.g. GABA and Glutamate, into the cytosol (Σ₂) of the presynaptic terminal. The total number of particles N is time-dependent, but for a short time period of about 60 s we assume that N is constant, as required for a grand canonical ensemble. We chose the duration of this temporal condition because within one minute the vesicles are filled, exocytosed and recycled [23].
Two chemical potentials, μ₁ and μ₂, drive the exchange of particles (influx and efflux) between both systems [19], [17]. When, for example, μ₂ > μ₁, then the particles flow from the particle reservoir Σ₂ into Σ₁. The standard unit of the chemical potential is kG, where G is one Gibbs; in our case the unit J/particle (resp. chemical energy per particle) is more appropriate.
The two systems Σ₁ and Σ₂ are in a thermodynamic equilibrium when they are in a thermal equilibrium state at constant temperature T and no diffusion of particles appears (μ₁ = μ₂ = μ). This also implies that a quantum-mechanical system in a thermodynamic equilibrium can exist in one or another state of temperature, where different numbers of particles constitute each of these states. The effective energy of both systems is E_eff = E − μN, where μ denotes the total chemical potential of both systems.
We chose the representation of a grand canonical ensemble as a supplement to the quantum field approach because it also comprises all relevant thermodynamic quantities. Moreover, such an ensemble delivers an appropriate description of open systems without interactions (free particles). We describe the relevant quantities of a grand canonical ensemble, e.g. free energy and information entropy, in sub-chapter 4.1.
3. Processes
We investigate quantum processes in the following three neural areas [8], [9]: the presynaptic area, the synaptic cleft and the postsynaptic region. Furthermore, we describe a population of neurons that also shows chaotic effects. In the presynaptic region, we focus on the pre-release phase, where we proceed from a grand canonical ensemble (sub-section 2.2.).
The processes in the synaptic cleft (excluding the release phase) no longer describe a grand canonical ensemble, because the system is in a thermodynamic non-equilibrium state. The reasons for this non-equilibrium are fourfold.
First, we have to apply the principles of the thermodynamics of dilute solutions to particular components of the cleft. That is, we can evaluate the corresponding expressions for the entropy, energy, free energy etc. in a similar way as we will do in sub-chapter 4.1. For example, the differential change of the entropy S in a dilute solution, while the energy E and the volume V change but the mole numbers do not vary, is [6], [16]
dS = (dE + p dV)/T, (6)
where n₀ moles of the solvent (typically water) are present and the quantities n₁ to n_g denote the moles of the several dissolved substances. Further, Eᵢ represents the energy of one mole of the fraction i of the solute and Vᵢ marks the volume of one mole of the same fraction, as for the internal energy. However, we will not calculate these expressions, since they are not relevant for quantum-based processes.
Second, typical solutes are dissolvable neurotransmitters (e.g. Acetylcholine) and ions like Na⁺, K⁺, Ca²⁺, Cl⁻, etc. All processes where solid substances dissolve can also be in states of chemical equilibria in solutions. However, we do not consider such states in this article, since we do not want to describe chemical equilibria in solutions.
Third, there also exist proteins which do not hydrolyze but can interact with other proteins. Further, small transmitter molecules can also interact. Therefore, not all such interactions are part of a grand canonical ensemble in an equilibrium state.
Fourth, the enzymatic degradation removes in part the transmitted neurotransmitters from the synaptic cleft during their transmission [2]. This removal represents an interaction, and not a reduction of the particle number in a varied thermodynamic equilibrium state. The remaining mechanisms to remove particles from the cleft are the diffusion through the extracellular fluid and the re-uptake that goes on after the released neurotransmitters react with postsynaptic receptors [18].
It is obvious that we do not direct this article to the investigation of the quantum-based thermodynamics of dilute solutions. In the cleft region, we focus first on the release of the small transmitters, which corresponds to a non-equilibrium phase transition. We describe this process by a threshold that differentiates between two different distribution functions: the Bose-Einstein probability function beneath the threshold and the Poisson distribution above the threshold. Second, we describe the corresponding variations of the relevant thermodynamic quantities.
We already described the quantum-based diffusion of neurotransmitters through the cleft in [15]; therefore, we omit it here.
In the postsynaptic region, we restrict our attention to two different distributions (Bose-Einstein statistics and Poisson processes) generating pulse trains (spiking neurons) that take place at random times.
In short, we consider different types of processes. First, we consider open systems where reversible processes occur, which are in states of thermal equilibrium. The entropy of reversible cycles is zero. Second, we assume that the thermodynamic equilibrium states can become unstable and non-equilibrium transitions emerge. This means that a threshold exists which separates the states of the system into two parts. Beneath this threshold, the system is in an equilibrium state; above the threshold, the system changes to a non-equilibrium state. This is a typical case of symmetry breaking [13].
4. Results
4.1. Pre-Release Phase: Particles are Components of a Grand Canonical Ensemble in Thermodynamic Equilibrium States
Now, we focus our attention entirely on the small subsystem Σ₁ of the grand canonical ensemble, because its number of particles is much smaller than that of Σ₂ (N₁ ≪ N₂), and this inequality pertains for the energies (E₁ ≪ E₂). We consider the great system Σ₂ as an assembly of copies of the small subsystem. For this reason, we drop the subscript 1 for all relevant quantities (e.g. N, E) of Σ₁. Further, in this sub-section we will not derive these characteristic quantities and do not discuss them elaborately.
The distribution function of the Bose-Einstein statistics at the thermodynamic equilibrium at temperature T and chemical potential μ is
P(n_k(t)) = (1 − exp(−β(ε_k − μ))) · exp(−n_k(t) β(ε_k − μ)), (7)
where n_k(t) denotes the number of Bosons at time t which occupy the energy level ε_k (resp. are in the state k); Boltzmann's constant k_B and the absolute temperature T determine the parameter β = 1/(k_B T).
In our quantum-based approach, the transformation of formula (7) into the following equivalent expression is more appropriate:
P(n_k(t)) = ⟨n_k(t)⟩^(n_k) / (1 + ⟨n_k(t)⟩)^(1+n_k), (8)
where ⟨n_k(t)⟩ denotes the mean number of the particles with energy ε_k at time t.
Notice that in our approach both quantities n_k(t) and ⟨n_k(t)⟩ are time dependent (Heisenberg representation), as we already mentioned in the previous sub-section 2.1. This means that in a thermodynamic equilibrium of longer duration both quantities stay constant, as in the classical time-independent approach. Only if the system transfers into a new equilibrium state (reversible system) do we observe new values for both quantities, since e.g. the temperature changes.
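A small numerical sketch of the occupation statistics of formulas (7) and (8): the mode probabilities form a geometric distribution whose weighted sum recovers the mean occupation. The level energy, chemical potential and temperature below are assumptions chosen only to make the formulas concrete.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # assumed physiological temperature, K
beta = 1.0 / (k_B * T)

def bose_einstein_mean(eps, mu=0.0):
    """Mean occupation <n_k> = 1/(exp(beta*(eps - mu)) - 1); requires mu < eps."""
    return 1.0 / np.expm1(beta * (eps - mu))

def bose_einstein_prob(n, n_mean):
    """Probability of finding n quanta in a mode with mean <n>, as in eq. (8)."""
    return n_mean**n / (1.0 + n_mean) ** (n + 1)

n_mean = bose_einstein_mean(1.0e-21)   # illustrative level energy, J
n = np.arange(200)
p = bose_einstein_prob(n, n_mean)
# p sums to one, and sum_n n*p(n) reproduces <n_k>, the content of eq. (10)
```

The check that Σ_n n·P(n) returns the mean is exactly the summation argument the text uses to connect the probability (7)/(8) with the expectation value.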
The calculation of the expected value ⟨n_k(t)⟩ needs some explanation with respect to the connection between the accustomed evaluation of such values in QFT [26] and the calculation of expectation values by the Bose-Einstein statistics, expression (8).
A state in a many-particle system of non-interacting, identical particles is denoted by the normalized particle state |n₁, n₂, …⟩, where each nᵢ represents the occupation number of the i-th one-particle state, equation (2). The expectation value reads
⟨n̂ᵢ⟩ = ⟨n₁, n₂, …| n̂ᵢ |n₁, n₂, …⟩ = nᵢ. (9)
Obviously, this calculation of ⟨n̂ᵢ⟩ is not the one which delivers the transformation between the formulas (7) and (8). To achieve this transformation we presume first that all particular expected values are settled by ⟨n̂_k⟩ = n_k. These specific expected values are then multiplied by the probability, expression (7), and finally we sum up these products:
⟨n_k(t)⟩ = Σ_{n_k} n_k P(n_k(t)) = 1/(exp(β(ε_k − μ)) − 1). (10)
The combination of QFT methods with the Bose-Einstein statistics is presumably more understandable if we operate with the density matrix
ρ̂ = Z⁻¹ exp(−β(Ĥ − μN̂)), with Tr ρ̂ = 1. (11)
The normalization factor (trace) is defined by the partition function
Z = Tr exp(−β(Ĥ − μN̂)). (12)
The expected value of the number of particles which are in the same state k,
⟨n̂_k(t)⟩ = Tr(ρ̂ n̂_k(t)) = 1/(exp(β(ε_k − μ)) − 1), (13)
is equivalent to the result of the direct calculation (10). The total internal energy and the total number of particles both follow obviously from expression (13). The total internal energy is
⟨E(t)⟩ = Σ_k ε_k ⟨n̂_k(t)⟩. (14)
The expected value of the total number of particles is
⟨N̂(t)⟩ = Σ_k ⟨n̂_k(t)⟩. (15)
In addition, we express the expectation value of the field momentum by
⟨P̂(t)⟩ = Σ_k ħk ⟨n̂_k(t)⟩. (16)
The change of the mean internal energy of a system in thermal equilibrium is
d⟨E⟩ = T dS − p dV + μ d⟨N⟩. (17)
When the volume changes at a constant temperature, the variation of the expected energy is caused by the change of the heat T dS received by the system (S denotes the entropy (26)) and by the work −p dV performed by the external forces on the system, where p is the pressure and V the volume. The third energetic component μ d⟨N⟩ completes the law of energy conservation; it indicates the energy originating from the variation of the number of particles through their exchange with the environment. It is obvious that we can formally determine μ by the following partial derivative, while S and V are held constant:
μ = (∂⟨E⟩/∂⟨N⟩)_{S,V}. (18)
We achieve another essential characterization of the quantity μ when we indicate that the free energy F (respectively the entropy S) of a system changes if we increase or decrease the number of molecules (particle exchange with the environment). Hereby, the transport of molecules from a region of higher particle concentration to a region of lower particle concentration releases free energy. Altogether, it is obvious that the chemical potential is an abstraction of molecular transport processes.
The following equation defines the free energy F:
F = ⟨E⟩ − TS, (19)
where
μ = (∂F/∂⟨N⟩)_{T,V}. (20)
The free energy has a minimum when no work by the system or by the environment has been performed, that is, the system is in a state of stable thermodynamic equilibrium: dF = 0 and d²F > 0.
Besides the distribution function (7) and the chemical potential, the following thermodynamic quantities are characteristic for a grand canonical ensemble. The quantum-mechanical partition function of a grand canonical ensemble is
Z_G = Σ_{N=0}^∞ exp(βμN) Z_N, (21)
where Z_N represents the partition function of the canonical ensemble, which is composed of only one kind of particles. If m different kinds of particles compose this ensemble, then we have to supplement the exponent of the partition function (21) by a sum of modified exponential terms:
Z_G = Σ_{N₁,…,N_m} exp(β(μ₁N₁ + … + μ_m N_m)) Z_{N₁,…,N_m}, (22)
where the index j specifies the kind of the transmitters and the particular partition function reads
Z_{N_j} = Tr_{N_j} exp(−βĤ_j). (23)
For the sake of simplicity, we will in the following consider only one kind of particles, because all quantum-relevant features of free particles are already visible when we focus on one kind of particles (e.g. amino acids).
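For non-interacting bosons the grand canonical partition function (21) factorizes over the k-modes, Z_G = Π_k (1 − exp(−β(ε_k − μ)))⁻¹. The sketch below evaluates this factorized form and checks the standard thermodynamic identity ⟨N⟩ = k_B T ∂ln Z_G/∂μ numerically; the mode energies and the temperature are illustrative assumptions.

```python
import numpy as np

k_B = 1.380649e-23
T = 310.0                    # assumed temperature, K
beta = 1.0 / (k_B * T)

def grand_partition_bosons(eps_levels, mu):
    """Z_G = prod_k 1/(1 - exp(-beta*(eps_k - mu))), the factorized
    form of eq. (21) for free bosons; valid only for mu < min(eps_k)."""
    x = np.exp(-beta * (np.asarray(eps_levels) - mu))
    if np.any(x >= 1.0):
        raise ValueError("requires mu < min(eps_k)")
    return np.prod(1.0 / (1.0 - x))

def mean_total_number(eps_levels, mu):
    """<N> = sum_k 1/(exp(beta*(eps_k - mu)) - 1), cf. eqs. (13), (15)."""
    return np.sum(1.0 / np.expm1(beta * (np.asarray(eps_levels) - mu)))

eps = np.array([1.0, 2.0, 3.0]) * 1e-21   # illustrative mode energies, J
Z_G = grand_partition_bosons(eps, mu=0.0)
N_mean = mean_total_number(eps, mu=0.0)

# numerical check of <N> = k_B*T * d(ln Z_G)/d(mu)
dmu = 1e-25
N_from_Z = (np.log(grand_partition_bosons(eps, dmu))
            - np.log(grand_partition_bosons(eps, -dmu))) / (2 * dmu) / beta
```

This derivative check is the same relation the text exploits in formula (25) to extract the particle number from the grand potential.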
The partition function (21) has the remarkable feature that we can derive important thermodynamic quantities from it. Therefore, the thermodynamic potential (grand potential) at constant pressure is
Ω = −k_B T ln Z_G = −pV, (24)
which again delivers important items of statistical physics by calculating partial derivatives, where some parameters are held fixed. For example, the entropy and the expected value of all particles of the ensemble are
S = −(∂Ω/∂T)_{V,μ} and ⟨N⟩ = −(∂Ω/∂μ)_{T,V}. (25)
Ω gets minimal at a stable thermodynamic equilibrium, while the pressure is held constant.
Despite this obvious, remarkable usefulness of the thermodynamic potential, we prefer to operate with the distribution function of the Bose-Einstein statistics, because this method can also be applied to different distribution functions, which are relevant for systems that are not in an equilibrium state (non-equilibrium phase transitions) [13]. Thus, if we calculate the entropy S with the aid of the distribution function (7), then we get
S = k_B Σ_k [(1 + ⟨n_k⟩) ln(1 + ⟨n_k⟩) − ⟨n_k⟩ ln ⟨n_k⟩]. (26)
The entropy S is of great importance because it delivers the information of the whole ensemble and provides a measure of the emitted power of the whole system. The entropy S is maximal in a stable thermodynamic equilibrium. Again, we can reproduce the expression (26) by calculating it with the density matrix, formula (11):
S = −k_B Tr(ρ̂ ln ρ̂). (27)
We will perform the same calculations (26) and (27) in sub-chapter 4.2. for systems which are not in an equilibrium state; therefore we call these quantities information entropy, because they do not determine the quantity entropy in the strict sense, which is only defined in equilibrium states.
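The mode-sum form of the entropy, S = k_B Σ_k [(1 + ⟨n_k⟩) ln(1 + ⟨n_k⟩) − ⟨n_k⟩ ln⟨n_k⟩], can be evaluated directly from the mean occupations. A minimal sketch, with assumed occupation values:

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def bose_entropy(n_means):
    """Entropy of a set of bosonic modes from their mean occupations:
    S = k_B * sum_k [(1 + <n_k>) ln(1 + <n_k>) - <n_k> ln <n_k>]."""
    n = np.asarray(n_means, dtype=float)
    return k_B * np.sum((1.0 + n) * np.log1p(n) - n * np.log(n))

# illustrative mean occupations (assumed values, not from the text)
S = bose_entropy([0.5, 1.0, 2.0])
```

The entropy grows monotonically with each mean occupation, which matches the interpretation of S as the information carried by the whole ensemble.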
The connection of the free energy F with the entropy, while keeping the volume constant, is expressed by
(28)
Here, σ_k² denotes the variance
σ_k²(t) = ⟨n_k(t)⟩ (1 + ⟨n_k(t)⟩), (29)
where the term ⟨n_k²(t)⟩ is given by
⟨n_k²(t)⟩ = ⟨n_k(t)⟩ + 2⟨n_k(t)⟩². (30)
This kind of variance (29) is characteristic for the Bose-Einstein statistics. This means, e.g., that the transmitters released into the synaptic cleft exert a bunching effect, which causes a strong variation of the density of transmitters (number of transmitters) during a fixed time interval. There is no continuous flow of transmitters impinging on receivers. The same effect also occurs for the emitted photons of a lamp.
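To make the bunching statement quantitative, a small sketch comparing the Bose-Einstein variance of formula (29) with the Poisson variance that the text introduces later for the release phase:

```python
def be_variance(n_mean):
    """Bose-Einstein variance <n>(1 + <n>): super-Poissonian (bunching)."""
    return n_mean * (1.0 + n_mean)

def poisson_variance(n_mean):
    """Poisson variance, equal to the mean: no bunching."""
    return float(n_mean)

# For any positive mean, the Bose-Einstein fluctuations exceed the
# Poissonian ones by <n>^2, which is the bunching effect of the text.
excess = be_variance(10.0) - poisson_variance(10.0)
```

The excess term ⟨n⟩² grows quadratically with the mean occupation, so the intermittency of the impinging flux becomes more pronounced for strongly occupied modes.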
It is obvious that if the variance σ_k² is zero, then the second term on the right side of equation (28) can be dropped and the standard equation
F = ⟨E⟩ − TS
becomes valid.
The infinitesimal change of the entropy during a reversible state transformation, while keeping the volume V constant, is given by
dS = δQ/T, (31)
where δQ is the amount of heat received by the open system Σ₁ at the temperature T [6].
When d⟨N⟩ ≠ 0, then we get for the infinitesimal variation of the entropy the result
dS = (d⟨E⟩ − μ d⟨N⟩)/T. (32)
When d⟨N⟩ = 0, then the calculation of dS delivers the standard formula
dS = (C_V / T) dT. (33)
The specific heat (heat capacity) of a substance confined in a constant volume is defined by the partial derivative
C_V = (∂⟨E⟩/∂T)_V. (34)
This quantity characterizes the amount of the variation of the reduced heat that is absorbed by the molecular substance to increase the temperature T (or vice versa). The specific heat is a measure of the thermal efficiency of the substance [12].
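The derivative in formula (34) can be evaluated numerically for a single bosonic mode; the mode energy and temperature below are illustrative assumptions, and the central-difference scheme is only a sketch of the definition:

```python
import numpy as np

k_B = 1.380649e-23
eps = 1.0e-21   # illustrative single-mode energy, J (assumption)

def mean_energy(T):
    """Mean energy of one bosonic mode with mu = 0: eps * <n(T)>."""
    return eps / np.expm1(eps / (k_B * T))

def heat_capacity(T, dT=1.0e-3):
    """C_V = d<E>/dT at constant volume, by central difference."""
    return (mean_energy(T + dT) - mean_energy(T - dT)) / (2.0 * dT)

C_V = heat_capacity(310.0)
```

For a single mode without zero-point contribution, C_V stays below k_B and approaches it at high temperature, which gives a quick sanity check on the numerics.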
Up to now, we did not vary the volume V of the system at a constant temperature. We already expressed the corresponding infinitesimal change of the expected total energy by equation (17). The amount of this change of the energy can be derived directly from equation (15) and from the explicit calculation of the expression d⟨E⟩:
(35)
The first term denotes the expression T dS, the second term constitutes −p dV and the last term indicates μ d⟨N⟩ [7].
The infinitesimal change of the entropy, when the volume is varied, the temperature T is held constant and the variation of the expected particle number is not zero, is expressed by
dS = (d⟨E⟩ + p dV − μ d⟨N⟩)/T. (36)
The system generates the heat itself (exothermic process); therefore the quantity δQ is negative. The correctness of the expression (36) can easily be confirmed when we solve equation (17) for dS. According to equation (17), the general differential variation of the free energy (19), while the volume V, the temperature T and the particle number ⟨N⟩ are not constant, reads
dF = −S dT − p dV + μ d⟨N⟩. (37)
4.2. Release Phase: Non-Equilibrium Phase Transition
In the molecular view, the arrival of an action potential triggers the opening of gated calcium channels (selective permeability) for Ca²⁺ ions. The driving force that initiates the diffusion of Ca²⁺ ions, due to their high concentration at the presynaptic membrane, into the region of low calcium concentration at the cleft can be described by additional ionic chemical potentials. The resulting elevation of the presynaptic calcium concentration causes the release of the neurotransmitters. The calcium influx into the synaptic cleft terminates when the corresponding ionic equilibrium state is established, i.e. when the two ionic chemical potentials become equal.
The diffusion of the particles of the system Σ₁ into the synaptic cleft is driven by the difference of the chemical potentials. The ionic and particle efflux into the synaptic cleft destroys the thermodynamic equilibrium state of Σ₁ and generates a non-equilibrium state. The influx of ions and particles into the cleft terminates when both kinds of chemical potentials become equal.
In the following, we neglect the chemical and molecular details of the release phase and concentrate on the distribution functions (sub-section 4.2.1.) and on the dominant thermodynamic quantities in non-equilibrium states, such as the information entropy S and the free energy F (sub-section 4.2.3.).
4.2.1. Transition of the Bose-Einstein Distribution Function to the Poisson Distribution
Here, we describe the release of neurotransmitters into the synaptic cleft. In the view of thermodynamics, this process represents a non-equilibrium transition. The previous thermodynamic equilibrium of the grand canonical ensemble dissolves and converts to a system that is no longer in a stable equilibrium state. Most particles of the ensemble are directly emitted into the diluted solution of the synaptic cleft. We first ask: "What is the probability to observe n particles during a time interval T, when the mean value of the particles is fixed?" We introduce the new notation ⟨n(T)⟩, where the capital T represents a time period, in opposition to the so far used Heisenberg notation ⟨n(t)⟩, where the lowercase t indicates the actual time. We make this change of notation to comply with the standard definition of the Poisson distribution. The answer is given by the Poisson distribution (where the particles are still Bosons):
P(n, T) = ⟨n(T)⟩ⁿ exp(−⟨n(T)⟩) / n! (38)
When the time period is sufficiently long, then both n and ⟨n(T)⟩ get time independent; however, we keep using T in order to differentiate it from the actual time t applied in the Heisenberg notation.
The notation of the Poisson distribution given by (38) may be unaccustomed for some readers; therefore, we rewrite this formula into the common form [5]
P(n) = (rT)ⁿ exp(−rT) / n!, (39)
where r = ⟨n(T)⟩/T is the mean counting rate of particles with state k during the time interval T. However, when we ask: "What is the probability that a particle is emitted during an infinitesimal time interval dt, on which t is centered?", then the answer is [7]
P = r dt. (40)
Characteristic for the distribution (38) is the equivalence of the mean value and the variance:
σ̃²(T) = ⟨n(T)⟩, (41)
where the tilde denotes the variance of the Poisson distribution. This identity characterizes the effect that the emitted particles do not bunch, as they do in the case of the Bose-Einstein statistics (29). We consider this anti-bunching effect of the "Poisson region" as the appearance of an ordered structure.
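The mean-equals-variance property of formula (41) is easy to verify by sampling; the mean release count below is an assumed value used only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_mean = 20.0                       # assumed mean number of released quanta
counts = rng.poisson(n_mean, size=100_000)

# For a Poisson process, mean and variance coincide, eq. (41):
mean_est = counts.mean()
var_est = counts.var()
fano = var_est / mean_est   # Fano factor ~ 1 (no bunching)
```

A Fano factor near one distinguishes the Poissonian release regime from the Bose-Einstein regime, whose Fano factor 1 + ⟨n⟩ exceeds one for any occupied mode.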
The thermodynamic reason that initiates the transfer of the equilibrium Bose-Einstein distribution (7) to the Poisson distribution is the non-equilibrium phase transition, which also causes the massive increase of particles in the synaptic cleft.
We characterize this symmetry breaking transition, which leads to the dominant transition of the original Bose-Einstein distribution function to the Poisson regime, by a function g, where a reasonable value g_c marks the threshold where this change occurs. To calculate this function we rewrite the general variance expression as
σ²(t) = ⟨n(t)⟩ (1 + g ⟨n(t)⟩). (42)
Thus, the function g describes the transition between the region below the threshold, where the variance is of Bose-Einstein type (g = 1), and the region above the threshold, where it is of Poisson type (g = 0). If we resolve equation (42) for g, we obtain the continuous expression [12]
g = (σ² − ⟨n⟩) / ⟨n⟩², (43)
where this function is measurable. When we calculate with this function the variance at the threshold value g_c = 1/2, then it reads
σ_c² = ⟨n⟩ (1 + ⟨n⟩/2). (44)
Our goal is the definition of the three distribution functions that are valid below the threshold, at the threshold and above the threshold. We achieve this goal by the continuous approximation of the discrete Poisson distribution (38) by the following Gaussian distribution function,
P(x) = (2π⟨n⟩)^(−1/2) exp(−(x − ⟨n⟩)² / (2⟨n⟩)), (45)
where the discrete variable n is replaced by the continuous variable x. The expectation value and the variance of this distribution function are both equal to ⟨n⟩, as the Poisson distribution (38) prescribes. This approximation gets even better as ⟨n⟩ increases [5]. When we presume that a Gaussian distribution also governs the threshold value, then we can rewrite expression (44) as
(46)
The corresponding Gaussian-based approximation reads
P_c(x) = (2πσ_c²)^(−1/2) exp(−(x − ⟨n⟩)² / (2σ_c²)). (47)
This function has the mean value ⟨n⟩ and the variance σ_c². We can continue the approximation for each value of ⟨n⟩ that is greater than the threshold value.
Therefore, the series of variances begins with the Bose-Einstein statistics (7), continues to the threshold value (45) and ends up (well above the threshold) with the Poisson distribution (38). It reads
σ² = ⟨n⟩(1 + ⟨n⟩) → σ_c² = ⟨n⟩(1 + ⟨n⟩/2) → σ̃² = ⟨n⟩. (48)
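A sketch of the variance series, assuming the interpolation form σ² = ⟨n⟩(1 + g⟨n⟩); this form is consistent with the Bose-Einstein variance (29) at g = 1 and the Poisson variance (41) at g = 0, but whether it matches the paper's eq. (42) exactly is an assumption on our part:

```python
def variance_series(n_mean, g):
    """General variance sigma^2 = <n>(1 + g*<n>): g = 1 gives the
    Bose-Einstein value, g = 1/2 a threshold value, g = 0 the Poisson value.
    The interpolation form itself is an assumption, not the paper's formula."""
    return n_mean * (1.0 + g * n_mean)

n = 10.0
series = [variance_series(n, g) for g in (1.0, 0.5, 0.0)]
```

The three entries reproduce the qualitative chain of the text: super-Poissonian fluctuations in equilibrium, an intermediate variance at the threshold, and the ordered Poissonian flux after the phase transition.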
4.2.2. Product of the QFT Based Probability with the Poisson Probability
In the previous sub-chapter 4.1. we asked: "What is the probability that n_k particles are in the state k (occupy a state of energy level ε_k) at the time t?" The answer was the Bose-Einstein probability (8).
As already mentioned, the average number of quanta in state k at time t reads in the Heisenberg picture
⟨n_k(t)⟩ = ⟨Φ| n̂_k(t) |Φ⟩, (49)
where |Φ⟩ is a state vector.
Now, we even go one step further and ask: "What is the probability of observing at time t a total expectation value, obtained by averaging over all particular expectation values, where each of them is multiplied by the relative frequency of its occurrence (Poisson distribution)?" This probability reads
(50)
We cannot ask: "What is the probability to find a particle at position x in state k at time t?", because of Heisenberg's uncertainty principle, Δx·Δp ≥ ħ/2, meaning that these two quantities cannot be measured simultaneously. However, we can simultaneously observe the energy of a particle, since the kinetic and the potential part both contribute to the same energy ε_k. Thus, the properly normalized probability amplitude to observe at position x at time t particles with energy ε_k during the period T, while ⟨n⟩ is held constant, reads
(51)
where the bra is defined by the expansion of the field operator
ψ̂(x) = Σ_k â_k φ_k(x). (52)
That is, we expand the field operator in terms of the energetic steady-state functions φ_k(x) (eigenfunctions of the time-independent Schrödinger equation). We already defined the ket by equation (2), where for each k-value only one particle is created. The expression (51) denotes a scalar product [4], [24].
Next, we consider an n-particle system and ask: "What is the probability amplitude that particle 1 is localized at time t at position x₁ with the energy ε_k1, particle 2 at x₂ with ε_k2, …, and particle n at x_n with ε_kn, during a time period T?" The answer is
(53)
where the sum of all possible occupied energy levels is restricted to n, because only n locations exist.
4.2.3. Impact of Quantum Fluctuations: Coherent and Incoherent Solutions of Quantum Field’s Correlation
Until now, we have not considered the release of many (up to about ) neurotransmitters, for example from one vesicle. We treat this set of particles as an emitted wave packet, spilled out into a very narrow spatial region of the synaptic cleft, whose different k-values are close together. We expand such field operators by wave packets, where the correlation between two different field operators has a coherent and an incoherent solution. Here, we evaluate only noise-free solutions; noisy solutions we will describe in sub-chapter 4.4.3.
We begin our corresponding evaluations by picking up equation (52), rewriting it for a one-dimensional annihilation field operator
=(x), (54)
and then project this field onto the eigenfunctions (x). Thus, we obtain the equation
(55)
where we included a damping constant and the fluctuating force . We get the results for three dimensions when we multiply the one-dimensional results with each other.
The initial point of our calculation is the following correlation between field operators at different positions and different times
(56)
We write the initial state in the form
(57)
We refrain here from all details of the calculation of the coherent and the incoherent solution of equation (56), because they are given in [14]. Therefore, we just present the two solutions, whose sum defines the complete correlation function (56).
The coherent part of the correlation function (56) is independent of the fluctuating force. We obtain the correlation function at the location x at time t and at the position x´ at time t´ by the expectation value of the product
, with (58)
exp . (59)
Here the expansion factors used in equation (57) describe a Gaussian wave packet, with variance (58) and the mass m of the particle. The important result is that the coherent solution (59) is damped out, with the factor .
We obtain the incoherent part of the correlation (56) by the following summation
(60)
(x).
The term denotes the average number of particles with quanta k at temperature T.
Here, the relevant result is the difference of the exponential terms in the square bracket. This effect is clearly visible if we set ; then we get the expression . This means that the incoherent part, which describes the impact of quantum fluctuations, restores the probability that the mean number of particles still arrives at its goals (receivers).
4.2.4. Information Entropy Well Above the Threshold
Above the threshold, we use the Poisson distribution (38) to calculate the information entropy S
(61)
,
(Notice that T denotes a period and not an absolute temperature as in sub-section 4.1.).
We insert the following approximation into equation (61)
(62)
Thus, the result for the approximated quantity S reads
(63)
To find the final information entropy, we have to sum up over all values as in expression (26)
(64)
When we insert formula (39) into equation (63), the rewritten information entropy is
(65)
The direct comparison between the entropy (26) and the information entropy (equation 64) is only possible numerically, because of the different variables in each expression. Here, we consider the neurotransmitters above the threshold as particles of a non-interacting field that is not in an equilibrium state. The expectation value of the energy of this field is
(66)
The mean value of the total number of particles reads
(67)
We consider expression (64) as the information of the field, while we adopt the pre-factor .
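Such a numerical comparison can be sketched as follows. Assuming the standard Shannon form S = −Σ p ln p (the article's symbols and pre-factor are not reproduced here, so this is an illustration rather than the article's exact expression), the entropy of a Poisson distribution with large mean approaches the entropy of its Gaussian approximation, ½ ln(2πe n̄):

```python
import math

def poisson_entropy(nbar, nmax=200):
    """Shannon entropy -sum p ln p of a Poisson distribution, computed with
    the recurrence P(n+1) = P(n) * nbar / (n + 1) to avoid large factorials."""
    h, p = 0.0, math.exp(-nbar)   # p starts at P(0)
    for n in range(nmax):
        if p > 0.0:
            h -= p * math.log(p)
        p *= nbar / (n + 1)
    return h

def gaussian_entropy(nbar):
    """Differential entropy of N(nbar, nbar): 0.5 * ln(2*pi*e*nbar)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * nbar)

nbar = 20.0
s_poisson = poisson_entropy(nbar)
s_gauss = gaussian_entropy(nbar)   # the two values differ only by O(1/nbar)
```

For n̄ = 20 the two entropies agree to within a few thousandths of a nat, which is why the Gaussian approximation is also useful for entropy estimates well above the threshold.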
4.3. Postsynaptic Region
The binding of specific neurotransmitters to particular receptors (proteins), distributed in the postsynaptic membrane, constitutes the postsynaptic response to the release of the transmitters. The synaptic integration combines all resulting multiple synaptic potentials within one neuron. In this article we concentrate on the small neurotransmitters (quantized particles), not on G-protein-coupled receptors; therefore a particular sequence of spikes represents an integration of EPSPs (excitatory postsynaptic potentials) [1].
In the following, we describe the probability of such a sequence of spikes with respect to the threshold , equation (42). Beneath this threshold, the Bose-Einstein statistics has its "regime". Above this threshold, the Poisson distribution "governs" the description of a spike train and the spike trains of a population of neurons. Beneath and above the threshold, we proceed on the assumption that such sequences are Markov processes.
4.3.1. Generation of Spikes Train Represented by Markov Processes
We assume that a Markov process gives the probability that the nth spike occurs at the time . The corresponding probability is
n, (68)
where denotes the joint probability. Further, we assume that the spatial shift between two succeeding spikes is constant; therefore we omit the integration over the location in this formula. Due to the chain rule, we evaluate the integrand of expression (68) by a successive product of conditional probabilities
(69)
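The chain-rule factorization in (69) can be illustrated with a minimal discrete example (the two-state chain and its numbers are hypothetical, not taken from the article): the joint probability of a path is the initial probability times a product of one-step conditional probabilities, and summing over all paths returns 1.

```python
import itertools

# Hypothetical two-state Markov chain ("quiet" = 0, "spike" = 1).
p0 = [0.6, 0.4]                  # initial distribution
T = [[0.7, 0.3],                 # T[i][j] = P(next state j | current state i)
     [0.2, 0.8]]

def joint_probability(path):
    """Chain rule as in (69): P(x1,...,xn) = P(x1) * prod_k P(x_k | x_{k-1})."""
    p = p0[path[0]]
    for prev, cur in zip(path, path[1:]):
        p *= T[prev][cur]
    return p

# All length-4 paths exhaust the sample space, so their probabilities sum to 1.
total = sum(joint_probability(path)
            for path in itertools.product((0, 1), repeat=4))
```

The same factorization underlies the integrals (68) and (73)-(76), with the discrete transition matrix replaced by a conditional probability density.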
4.3.2. Spike Trains of the Bose-Einstein Statistics
In this sub-chapter, we "speculate" that some particles can be emitted very close beneath the threshold; therefore, we calculate the spike trains for this unexpected case. To calculate the probability of a sequence of spikes that is generated by one neuron, we rewrite the standard form of the Bose-Einstein distribution (8) by setting, where denotes the mean rate over a period T. So, the rewritten distribution function reads
(70)
where indicates the number of emitted spikes (events) during the time interval T.
To use the Markov formula (69) we have to define the conditional distribution function. We achieve this task when we reduce the time interval T to a particular time spot t. We fulfill these two conditions by the following conditional probability
(71)
Further, we set, meaning that in the considered short time difference only two spikes are emitted. The normalization factor of (71) is
(72)
Here, we set for the calculation of this normalization factor.
We illustrate the calculation of expression (68) by the evaluation of
(73)
The result of the analytic calculation of this integral takes the following "unsymmetrical" form, despite the symmetrical form of the original expression (73)
(74)
The calculation of the probability that the third spike happens at time delivers
(75)
.
It is tedious to calculate this probability analytically and, furthermore, we cannot represent the result in a compressed form, as it was possible for . Figure 1 represents the trajectories for , where four different values (0.5, 0.6, 0.7, 0.8) are taken for the previous spiking time . We observe that the probability that the third spike occurs at time stays approximately constant with increasing . In contrast to this observation, the Poisson distribution, which we describe in the next subsection, decreases and converges to zero when the time point increases [5].
The probability that the nth spike occurs at time is
(76)
Figure 1. The probability given by expression (75). The upper curve represents the value 5; we then always increase the value for by 0.5, down to the lowest curve with the value 8. For all possible values of between 0.5 and 0.6, the corresponding trajectories lie between the curve for 5 and for .6, and so forth. In the headline of this figure, we denote the mean rate simply by r.
4.3.3. Spike Trains Calculated by an Approximated Poisson Distribution
We reformulate the Gaussian approximation (47) by the following density function
(77)
The expectation value and the variance of the corresponding distribution function are both equal to . This equality distinctly characterizes the Poisson distribution. Our approximation is more suited for the following calculations than the original, discrete Poisson distribution (38).
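The quality of this continuous approximation can be checked numerically. For an illustrative mean of n̄ = 50 (a value of ours, not from the article), the Gaussian density with mean and variance both equal to n̄ tracks the discrete Poisson probabilities closely:

```python
import math

def poisson_pmf(n, nbar):
    """Discrete Poisson probability for n events with mean nbar."""
    return math.exp(-nbar) * nbar ** n / math.factorial(n)

def gaussian_density(x, nbar):
    """Continuous approximation: normal density with mean and variance nbar."""
    return math.exp(-(x - nbar) ** 2 / (2.0 * nbar)) / math.sqrt(2.0 * math.pi * nbar)

nbar = 50.0
max_err = max(abs(poisson_pmf(n, nbar) - gaussian_density(n, nbar))
              for n in range(150))
# max_err is small compared with the peak probability ~ 1/sqrt(2*pi*nbar) ~ 0.056
```

The residual discrepancy shrinks like 1/n̄ (it stems from the skewness of the Poisson distribution), so the approximation improves the further one is above the threshold.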
The conditional probability density is expressed by
(78)
where marks the mean time distance between two subsequent spikes, because after each individual event a refractory time span occurs. The normalization factor is .
The probability that at the time the nth spike occurs, which one neuron generates, is
, with. (79)
We now extend this approach to a population of M different neurons by applying the method of path integration [7], with the result [14]
(80)
The normalization factor is given by
(81)
We evaluate the coefficient by
(82)
where the sub-index n still denotes the nth spike and characterizes each of the M different neurons. The expression describes the temporal distance between the nth spike and the previous one, both generated by the same neuron.
We modify the two variables t and by the following constraints:
(83)
(84)
The quantity characterizes the time, when the nth spike of the neuron occurs; the expression represents the mean distance between all pairs of two succeeding spikes of the neuron.
It is obvious that the fluctuations of the different spike trains make the production of n spike trains more reliable with respect to the uncertainty in the total time t (83), even if the individual spikes fluctuate in time (varying time ).
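This averaging effect can be illustrated by a small Monte-Carlo sketch (the interval parameters and sample counts are illustrative assumptions, not values from the article): if each interspike interval fluctuates around a mean distance Δ, the time of the nth spike still concentrates around nΔ.

```python
import random

random.seed(42)
delta, sigma = 1.0, 0.1       # assumed mean interval and its fluctuation
n_spikes, trials = 20, 2000

def nth_spike_time():
    """Time of the nth spike: the sum of n fluctuating interspike intervals."""
    return sum(random.gauss(delta, sigma) for _ in range(n_spikes))

mean_t = sum(nth_spike_time() for _ in range(trials)) / trials
# mean_t lies close to n_spikes * delta = 20: individual fluctuations average out
```

The relative spread of the nth spike time shrinks like 1/√n, which is the statistical content of the reliability statement above.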
4.4. Non-Dendritic Networks of Neurons
4.4.1. Model Outline
Here, we present a basic model of a linear chain of neurons without additional dendritic inputs. We formally base this approach on the sender/receiver paradigm. Figure 2 summarizes this model, where a ring of neurons can be constructed when we fuse the first neuron with the last one ( block). We consider this connection as a simplified contribution to plasticity in the brain. The first neuron (e.g. a sensory neuron) denotes the initial sender , where the succeeding units in this line (except the final receiver , e.g. a motor neuron) represent neurons, which are considered as a combined block that consists of a receiver and a sender component. Such a receiver/sender () neuron is a general model of the internal neural processes that we described e.g. in the previous third chapter.
Figure 2. Linear chain of N neurons that mainly consists of pairs of receiver/sender neurons (), except the initial sender and the final receiver.
All messages () that are sent or received represent complex molecules that correspond to the concentrations of different neurotransmitters, which in the end trigger the appropriate spike train. Each receiver/sender block () internally processes the incoming message and then transfers it as the modified outgoing message . In our model, we represent these messages by Bosons. The operator creates a Boson and the operator annihilates a Boson. This is in close analogy to our treatment of small neurotransmitters. For simplicity, we omit in this chapter the "hat" label that marks an operator.
A simple model for a sender/receiver unit is a two-state system that rests in a ground state and is in an active state when it sends a message. Such a system corresponds to a simple switch. We describe the dynamics of such a two-level system by two Fermi operators (ground state j, excited state ) and its Hermitian conjugate , where and are both flip operators. The Bose operators fulfill the commutation rule , while the Fermi operators fulfill the anti-commutation principle ; the mixed commutator is zero, .
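For a two-level system, the flip operators can be written as 2×2 matrices, and the Fermi-type anti-commutation relation can be verified directly. A minimal sketch in plain Python (the helper names are ours):

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    """2x2 matrix sum."""
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

# Flip operators of a two-level switch: raise |ground> -> |excited>, and lower back.
sigma_plus = [[0, 1], [0, 0]]
sigma_minus = [[0, 0], [1, 0]]

# Anti-commutator: sigma_plus * sigma_minus + sigma_minus * sigma_plus
anti_commutator = matadd(matmul(sigma_plus, sigma_minus),
                         matmul(sigma_minus, sigma_plus))
# anti_commutator equals the 2x2 identity, the defining Fermi-type relation
```

The two products are the projectors onto the excited and the ground state, respectively; their sum being the identity is exactly the "switch" property exploited in the Hamiltonians below.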
Classical Hamiltonians can already show different kinds of bifurcations and attractors [28], where these effects are caused by the dynamics of non-linear systems [27], [11]. In quantum mechanics similar effects also occur; however, we want to point out that sub-chapter 4.4. emphasizes two additional points. First, in quantum mechanics the real and imaginary parts of the operators interact, and these interdependencies lead to a different behavior than in the case of real variables. Second, a circuitry of quantum-based neurons is not comparable with a network of corresponding modules of quantum computers, since the internal states of the latter must be coherent. This requirement is not fulfilled in quantum biology. Moreover, neurons are analog "devices", not digital modules that operate with qubits.
4.4.2. The Interaction Hamiltonians
We write down the Hamiltonians in the interaction picture, where we can neglect the contributions of non-interacting particles. The equations of motion in this representation are identical to those in the Heisenberg picture [14]. We specify the whole system by the following interaction Hamiltonians.
The Hamiltonian that represents the first sender, which sends the message , reads
(85)
where and denote the two corresponding flip operators. The coupling constant is .
The Hamiltonian that describes the whole chain of r/s pairs (except the first and last elements) is defined by
(86)
Here, we use the abbreviations: , . We assign the coupling constant to the receiver part of the combined module and the coupling constant corresponds to the sender part.
We denote the Hamiltonian of the final receiver of by
(87)
4.4.3. The Heisenberg Equations of Motion of the Hamiltonian
We get a first relevant impression of the kind of equations of motion and their solutions when we consider the Hamiltonian of the first receiver/sender pair
(88)
where we introduced the abbreviations , . In addition, we will use the Hermitian inversion operator and the two abridgements and .
We present the general equations of motion in the Heisenberg picture, which means that these equations include damping constants and fluctuating forces. Since we present the equations of motion for the expectation values of the corresponding operators, we can neglect the expectation values of the fluctuating forces, because they are zero. Thus, the equations of motion read as follows, where a dot marks the temporal derivative and the ´s denote the damping constants:
(89)
(90)
(91)
(92)
(93)
(94)
For simplicity, we refrain from explicitly marking the expectation values by angle brackets, e.g. , because we only work with the expectation values of the operators in this article. However, we must treat expectation values as complex numbers; therefore, we differentiate between the real and imaginary parts of an expectation value.
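Since the operator symbols of (89)-(94) are not reproduced here, the qualitative behavior of such damped expectation-value equations can still be sketched with a single toy equation of the same type: a complex expectation value obeying d⟨b⟩/dt = (−iω − κ)⟨b⟩ oscillates and, for κ > 0, spirals into the fixed point 0. The parameter values below are illustrative, not taken from the article:

```python
omega, kappa = 1.0, 0.05          # assumed frequency and damping constant
dt, steps = 0.001, 10000          # explicit Euler integration up to t = 10

b = 1.0 + 0.0j                    # initial expectation value <b>(0)
for _ in range(steps):
    b += dt * (-1j * omega - kappa) * b   # d<b>/dt = (-i*omega - kappa) <b>

# |<b>(t)| decays like exp(-kappa * t); for t = 10 that is exp(-0.5) ~ 0.607
```

Separating b into real and imaginary parts reproduces the coupled oscillation between Re b and Im b that the phase portraits below exhibit for the full system.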
Undamped Solutions
In a first step, we present the solutions of equations (89) to (94) without damping coefficients, because we want to show their symmetry; later on, we will describe how this symmetry breaks when damping coefficients are included. Figure 3 depicts the phase portrait of the three real parts of the variables (expectation values) without damping effects: Re, Re, Re; we observe an attractive limit cycle. We anticipated the observed symmetry of the orbit presented in figure 3, because all particular solutions of the three variables show a periodic behavior. This is why we skip the separate presentations of these three trajectories.
Figure 3. Phase portrait of the three real variables and that are in the figure abridged called (,, ). The resulting orbit is a slightly curved attractive limit cycle that does not lie completely in a plane. The parameter values are: , .
We continue our presentation with the additional consideration of the imaginary parts. In addition, we slightly change some initial conditions with respect to the previous ones in order to also demonstrate the dependence of the solutions on the initial values. Figure 4 sketches the phase portrait of the real part and the imaginary part of . This phase portrait confirms the strong interrelation between both variables, representing a fractal manifold. We can envision the generation of this manifold when we consider the dynamics of this construction. We start with a triangle that is open by a small amount , and then we shift the next unclosed triangle a little bit with respect to the first one. Then, we permanently repeat this "open triangle cycle".
Figure 4. Phase portrait (fractal manifold) of the pair of real variables (Re, Im). The initial values are 1, 1), , The remaining parameters are still unchanged: .
The phase portrait shown in figure 5 represents a manifold that is a torus. This result confirms the suspicion (not proved by the corresponding Lyapunov coefficients) that the orbits of the real part () and the imaginary part () are both quasi-periodic.
Figure 5. Phase portrait of the pair (Re, ), which represents a torus. The parameter values are: 1), , ; , 1, .
Figure 6 shows the phase portrait of the triple of the real variables (, , ).
Figure 6. Phase portrait of the triple of the real variables: Re, , Re. The parameter values are: , ; , .
Figure 7 demonstrates the phase portrait of the corresponding triple of the imaginary variables (, , ).
Figure 7. Phase portrait of the triple of the imaginary variables: Im, , Im. The parameter values are: , ; ,
Damped Solutions
Now, we slightly turn on the damping constants: , where all other initial conditions and parameter values remain unchanged. Figure 8 demonstrates that the trajectory, e.g. of Re, converges after some oscillations to the attractive fixed point 0, as we expect. Due to the strong similarity to , we refrain from presenting the trajectory of .
Figure 8. Trajectory of The parameter values are: .
Figure 9 displays the modified phase portrait of the triple (, , ), when a slight damping is turned on. The previous symmetry observed in figure 3 is broken; the trajectory starts at the symmetric limit cycle, shown on the top of this figure, but then abandons it very quickly and spirals down to the fixed point 0.
Figure 9. Phase portrait of the triple of the real variables: Re, Re, Re. The parameter values are: , .
Remarkable is the change of the phase portrait of the two variables (, ) sketched in figure 10 compared with figure 4 (no damping). The periodicity of constructing open triangles is broken, but the principle of generating triangles (even if they are distorted) survives. We observe a chaos-like convergence to the fixed point (0, 0) by trajectories that are composed of triangle-like sub-orbits.
Figure 10. Phase portrait of the pair (, ). The initial values are: 1, 1), , The remaining parameters are still unchanged: .05.
The phase portrait of the triple of imaginary variables (, ) is shown in figure 11.
Figure 11. Phase portrait of the triple of the imaginary parts (, ). The initial values are 1, 1), , The remaining parameters are still unchanged: .05.
4.4.4. Stochastic, Undamped Solutions of the Heisenberg Equations of Motion of the Hamiltonian
Up to now, we solved equations (89) to (94) without considering noise. In this sub-section, we present particular undamped solutions of the same equations; however, we include a separate stochastic variable that obeys a uniform distribution and lies in the domain [−0.1, +0.1]. Figure 12 demonstrates how noise changes the fractal phase portrait of the pair (, ) that we previously have shown in figure 4.
Figure 12. Stochastic phase portrait (fractal image) of the pair of real variables (Re, Im). The initial values are: 1, 1), , The remaining parameters are still unchanged: .
Figure 13 presents the noisy phase portrait of the following three imaginary variables: (). We have already shown the noise-free solution in figure 7, where it is again illustrative to recall the symmetric solution.
Figure 13. Stochastic phase portrait of the triple of the imaginary variables (I, Im). The initial values are: 1, 1, , , The remaining parameters are still unchanged. .
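The stochastic variable used above can be sketched directly. A minimal check of its statistics (the sample count is an illustrative choice of ours): a uniform variable on [−0.1, +0.1] has zero mean and variance (0.2)²/12 = 1/300 ≈ 0.0033.

```python
import random

random.seed(7)
samples = [random.uniform(-0.1, 0.1) for _ in range(100000)]

mean = sum(samples) / len(samples)                 # close to 0
var = sum(x * x for x in samples) / len(samples)   # close to 0.2**2 / 12
```

Because the mean of the noise is zero, it drops out of the expectation-value equations of motion, which is why only the shape of the phase portraits, not their fixed points, is affected.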
4.4.5. The Damped Heisenberg Equations of Motion of the Hamiltonian
The previous sub-chapter already demonstrated the dominant features of the equations of motion of a receiver/sender pair (figure 2). Therefore, we do not present the complete description of all solutions of the remaining Hamiltonians listed in sub-chapter 4.4.2.
For convenience, we repeat the Hamiltonian of the first two (r/s) pairs of the chain given by (86)
(95)
The corresponding equations of motion are:
(96)
(97)
(98)
(99)
(100)
(101)
(102)
Figure 14 sketches the convergence of . This trajectory oscillates before it reaches the fixed point 0. We present this figure to demonstrate that the principal behavior of a variable in a chain of two blocks is very similar to that shown in figure 8.
Figure 14. Trajectory of the imaginary part of. The parameter values are: .
5. Conclusions
This article combines particular quantum field-based effects of transmitter-based processes of synaptic neurons with typical thermodynamic characterizations of a grand canonical ensemble. In more detail, this implies that we consistently model the neurotransmitters as particles of a non-relativistic quantum field. In the pre-phase of the exocytosis, the particles are elements of a grand canonical ensemble that is in a thermodynamic equilibrium. The release of neurotransmitters destroys this equilibrium. Therefore, we observe a non-equilibrium phase transition that breaks the symmetry. In addition, we describe the released neurotransmitters as wave packets, where we calculate the effects of the corresponding quantum fluctuations.
A threshold separates the equilibrium region (no transmission phase) from the non-equilibrium region (transmission phase), where different probability distributions are applicable. Beneath the threshold, the Bose-Einstein statistics is relevant; above the threshold, the Poisson distribution is valid. We approximate the latter distribution function by a Gaussian function, where we determine its variance by the mean value of the number of particles that are in a defined k-state (momentum space). Beneficially, this approximation is also valid at the threshold.
Well above the threshold, the Poisson distribution determines the correct interplay between secreted particles and their receivers, because it describes a steady flux of particles with nearly constant densities that impinge on the receivers. Furthermore, we calculate the resulting trains of spikes under the assumption of Markov processes, where we differentiate between the spikes of one receiver and the spikes of many different receivers that fire simultaneously.
In addition, we characterize the two above-mentioned regions by two different information entropies. We use these two entropies to identify the quantities that are relevant in the two different regions.
Finally, we connect the individual neurons to a non-dendritic chain or a ring. We compose this neural circuitry of Bosons and Fermions to illustrate the quantum field-based interactions between these two different types of particles, which we represent by Hamiltonians that integrate both types. Bosons are carriers of messages; Fermions represent information switches that can modify the incoming information before they transfer it to the neighboring neuron. Here, the important observation with respect to these interactions is the occurrence of chaotic solutions.
Acknowledgements
The European Brain Project (No. 720270, SGA2, SP 10) initiated the research leading to these results. We gratefully acknowledge the extensive numerical support of Dr. V. Avrutin for this article. We also thank Mr. J.-D. Korus for his very valuable support in preparing the symbolic computations of chapter four.
References