On the wavefunction collapse

Wavefunction collapse is usually seen as a discontinuous violation of the unitary evolution of a quantum system, caused by observation. Moreover, the collapse appears to be nonlocal in a sense that seems at odds with General Relativity. In this article the possibility that the wavefunction evolves continuously, and hopefully unitarily, during the measurement process is analyzed. It is argued that such a solution has to be formulated using a time-symmetric replacement of the initial value problem of Quantum Mechanics. Major difficulties in apparent conflict with unitary evolution are identified, but eventually its possibility is not completely ruled out. This interpretation is, in a weakened sense, both local and realistic, without contradicting Bell's theorem. Moreover, if it is true, it makes Quantum Mechanics consistent with General Relativity in the semiclassical framework.

1. Introduction

1.1. Unitary evolution and wavefunction collapse. The state of a quantum system is represented by a vector |ψ⟩ in a Hilbert space H. Its evolution is governed by the Schrödinger equation,

(1) i ∂/∂t |ψ(t)⟩ = Ĥ(t)|ψ(t)⟩, |ψ(t_a)⟩ = |ψ_a⟩,

where Ĥ(t) is the Hamiltonian, a Hermitian operator on H. If the quantum system is closed, then Ĥ is time independent. The solutions of the Schrödinger equation have the form

(2) |ψ(t_b)⟩ = Û(t_b, t_a)|ψ(t_a)⟩,

where Û(t_b, t_a) is a unitary operator on H, given by the time-ordered exponential

(3) Û(t_b, t_a) = T exp(−i ∫_{t_a}^{t_b} Ĥ(t) dt),

which for a time-independent Hamiltonian reduces to e^{−i(t_b−t_a)Ĥ}.

Observables are represented by Hermitian operators Ô on the Hilbert space H. The outcome of a measurement is an eigenvalue λ ∈ R of Ô, and the state of the observed system is an eigenstate |λ⟩ of Ô corresponding to λ. The probability that a quantum system previously in the state |ψ⟩ is found in the eigenstate |λ⟩ is, according to the Born rule, |⟨λ|ψ⟩|². In particular, if |ψ⟩ represents a single particle, then according to the Born rule the probability density that the particle is detected at a time t_a at the position x_a ∈ R³ is

(4) ρ(x_a, t_a) = |⟨x_a|ψ(t_a)⟩|²,

where |x_a⟩ is the eigenstate of the position operator x̂ corresponding to the position x_a, so that ⟨x|x_a⟩ equals the Dirac distribution δ(x − x_a). Of course, after the particle is detected at the time t_a at the position x_a, the probability to find it elsewhere vanishes, so the wavefunction changes - we say it collapsed at the position x_a.

The collapse specified by the Born rule suggests that the wavefunction is merely a tool for calculating probabilities. On the other hand, Quantum Mechanics (QM) describes everything - particles, atoms, molecules, hence all material objects - as wavefunctions, so are they merely probabilistic waves?
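As a minimal concrete illustration of equations (1)-(4) (a two-level system with a hypothetical Hamiltonian, in units with ħ = 1; the numbers are my own, not from the article), the propagator can be built by diagonalizing Ĥ, and the Born rule applied to the evolved state:

```python
import numpy as np

# Toy two-level system, H = (omega/2) * sigma_x, hbar = 1 (hypothetical choice).
omega = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * omega * sx

def U(t_b, t_a):
    """Unitary propagator of equation (3) for a time-independent Hamiltonian."""
    w, v = np.linalg.eigh(H)  # diagonalize H: H = v diag(w) v^dagger
    return v @ np.diag(np.exp(-1j * w * (t_b - t_a))) @ v.conj().T

psi_a = np.array([1, 0], dtype=complex)   # initial state |0>
psi_b = U(2.0, 0.0) @ psi_a               # equation (2): evolved state

# Unitarity: the norm of the state is preserved.
norm_b = np.vdot(psi_b, psi_b).real

# Born rule: probability of finding the sigma_z eigenstate |1>.
p1 = abs(np.vdot(np.array([0, 1], dtype=complex), psi_b)) ** 2
# For this Hamiltonian, p1 = sin^2(omega * t / 2).
```

The propagator is computed through the eigendecomposition of Ĥ, which is exactly the finite-dimensional version of the exponential in equation (3).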
The notion of discontinuous collapse has to face some problems. First, how can the Schrödinger equation, so successfully confirmed, be reconciled with the apparent wavefunction collapse? Second, how can we reconcile a collapse taking place simultaneously everywhere in space with Relativity, which does not accept the notion of absolute simultaneity?
On the other hand, trying to replace it with an effect resulting from dynamics also encounters severe difficulties, some of which will be explored here.
1.2. Motivation. In this article, I am interested in exploring the possibility that the dynamics of quantum systems, governed by the Schrödinger equation, can take place without discontinuous collapse, even during measurements. Hopefully it turns out that the wavefunction can evolve by Schrödinger's equation alone, but maybe we need a general relativistic version, or at least an approximation like the non-linear Schrödinger-Newton equation [1] (non-linear modifications of the Schrödinger equation of the type studied by Weinberg are known to allow signaling [2], but it is not excluded that other nonlinear approaches do not). The literature exploring the possibility that the Schrödinger-Newton equation introduces enough non-linearity to account for collapse in a way similar to the Ghirardi-Rimini-Weber approach [3] is very rich; see for example [4,5]. The approach presented here is different, in the sense that it tries to account for the collapse with the minimal possible departure from unitary evolution, or at least from a continuous, albeit non-unitary or even non-linear, evolution.
The idea that unitary evolution is not broken is central also in the many worlds interpretation [6,7,8,9], enhanced with the proposal that decoherence can resolve the measurement problem [10,11,12] (although there are some serious objections to this proposal [13,14,15]). But while in these approaches the unitary evolution of Schrödinger's equation is maintained at the multiverse level, where all branches are included, the collapse is still present at the level of a branch. Here I am interested in whether it is possible to maintain unitary evolution in a single world, or at the branch level.
Given that in standard QM, briefly described in section §1.1, the statistical interpretation of the wavefunction given by Born is confirmed by observations, it is the correct description for all practical purposes. And this description suggests that quantum measurement leads to a discontinuous wavefunction collapse. However, it is still possible that the wavefunction inferred from the measurements is not the same as the real wavefunction, and this is one of the central themes of this article. The reasons which leave enough flexibility to allow for this possibility are the following:

(1) One cannot directly measure the wavefunction. What the measurements tell us is that the quantum state of the observed system is an eigenstate of the observable.
(2) Even this information is subject to inherent limitations given by the error-disturbance uncertainty relations [16,17,18]. According to these relations, the more precise a measurement is, the more it disturbs the observed state. This means that the collection of measurement results can only give an approximation of the wavefunction.

Given that the constraints imposed by measurements on the wavefunction are more relaxed than usually assumed, a question becomes justified: Is it possible that the real wavefunction fits the observations provided by measurements, without actually having to collapse in a discontinuous way? In standard QM, the wavefunction represents our knowledge about the observed system and the probabilities of the possible outcomes of future measurements, so let us call the wavefunction representing probabilities the epistemic wavefunction. What I propose is that one should consider the possibility that there is also a real, ontic wavefunction, which evolves continuously even during the collapse, and which is merely approximated by the epistemic wavefunction.
Our measurements give us the state representing the ontic wavefunction within the limits of error and disturbance. This entails a difference between the real, ontic wavefunction, and our statistical knowledge about it, represented by the epistemic wavefunction. I argue that the collapse we observe takes place only at the epistemic level, while it is still possible that the real wavefunction evolves continuously, following the Schrödinger equation, or at least a modified, perhaps non-unitary or even non-linear version.
This proposal is in line with other proposals that there are entities which represent "things", "beables". This idea is pursued for example in the de Broglie-Bohm theory [19,20,21,22] and other hidden-variable theories, and, in a unitary version, in 't Hooft's approach based on cellular automata [23]. But the departure of these proposals from Schrödinger's equation is significant. Here I will try to obtain a description still based on wavefunctions, and hopefully still governed by the Schrödinger equation. For example the atom is a "thing", which contains electrons whose states are very well described by Schrödinger's equation, or at least an approximation of it. Schrödinger himself originally saw the wavefunctions as real entities, but because entanglement makes them unlike fields or any other classical entities, he did not continue to pursue this possibility.
I will take into account major difficulties encountered by the proposal of a wavefunction which describes reality and not merely probabilities, and see if it survives at the end. If we can obtain a consistent picture, we will be entitled to call this wavefunction ontic (and still keep the epistemic, probabilistic approximation, which is the only one we can access by quantum measurements).
I expect that the information obtained from measurements, encoded in the epistemic wavefunction, describes to some degree also the ontic wavefunction. However, if we assume that the measurement also tells the ontic state of the observed system, then the conflict between dynamics and measurement can only be resolved by admitting a discontinuous collapse, either of the kind in the standard QM, or a spontaneous collapse, as in the GRW theory [3,24].
The tension between measurements and unitary evolution seems to lead with necessity to the collapse, so if in reality there is no discontinuous collapse, this can only be achieved if either the measurement or the dynamics is more flexible than we thought (or both). Let us first verify if our assumptions about measurement are true, and only change the dynamics if needed.
The purpose of this exploration is to find the possible conditions that any unitary approach to QM should satisfy, and see whether this possibility is still consistent.

2. The tension between quantum measurement and the initial conditions of the observed system
It was clear since the dawn of quantum mechanics, especially with von Neumann's formulation [25], that the state of the observed system appears to be in general a superposition of the possible results of a measurement, yet at the end of the measurement, the state turns out to be one of these possibilities. This seems to require a projection of the state in an eigenstate of the observable. However, it was suggested that by taking into account the environment, which includes the measurement apparatus, the evolution is still unitary. This idea was developed in the decoherence program [10,26,27]. Indeed, by accounting for the environment, the density matrix decoheres, so that the off-diagonal terms vanish. The diagonal terms are then interpreted as a statistical ensemble, so we cannot actually claim that the evolution is unitary, because evolving a pure state into a mixture means collapse. In such approaches, unitarity exists only when all decohered histories are taken into account.
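The decoherence mechanism just described can be made concrete with a minimal two-qubit sketch (my own toy numbers, not the article's formalism): the joint system-plus-environment state remains pure and evolves unitarily, but the reduced state of the system alone loses its off-diagonal terms and becomes, for all practical purposes, a mixture:

```python
import numpy as np

# Toy decoherence: the system qubit starts in (|0> + |1>)/sqrt(2); interaction
# with the environment correlates |0> with |e0> and |1> with |e1>, where the
# environment states overlap only slightly: <e0|e1> = eps (hypothetical value).
eps = 1e-3
e0 = np.array([1.0, 0.0], dtype=complex)
e1 = np.array([eps, np.sqrt(1 - eps ** 2)], dtype=complex)  # <e0|e1> = eps

# Joint state on H_S (x) H_E; it is still a pure state.
Psi = (np.kron(np.array([1, 0], dtype=complex), e0)
       + np.kron(np.array([0, 1], dtype=complex), e1)) / np.sqrt(2)

# Reduced density matrix of the system: trace out the environment.
Psi_m = Psi.reshape(2, 2)             # rows: system index, columns: environment
rho_S = Psi_m @ Psi_m.conj().T

offdiag = abs(rho_S[0, 1])            # = eps / 2: off-diagonal term nearly gone
purity = np.trace(rho_S @ rho_S).real # < 1: the reduced state is a mixture
```

The purity dropping below 1 is precisely the point made in the text: at the level of the system alone, a pure state has evolved into a mixture, which is why decoherence by itself does not give a unitary account of the collapse at the branch level.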
In fact, any attempt to give a purely unitary description of the wavefunction collapse at the branch level can work only for very special initial conditions of the observed system and the measurement apparatus. In [28] it was proven that in order to get a unitary description of the measurement process, the initial conditions of the observed system and those of the measurement apparatus have to belong to a zero-measure subset of the full Hilbert space. The fact that unitary evolution is compatible with measurement only for special initial conditions, requiring therefore a fine-tuning, seems to endanger the principle of causality. I will address this delicate problem in sections §5, §6 and §7, and provide a more rigorous picture in §8.

3. Propagation of a photon from one place to another
Let us start with a simple case: a photon going from one place to another. The Schrödinger equation has to take the wavefunction at the time t_a from the place where it is emitted, and evolve it unitarily to another place, where it is detected at a later time t_b. We already see that without a collapse the photon has to have fine-tuned initial conditions at t_a, so that at t_b it is found in a definite place. Let us now see how fine-tuned the initial conditions have to be, or even if it is possible for a photon to satisfy both the initial and final conditions.
Suppose that a photon is emitted at the position x_a at the time t_a, and it is later, at the time t_b > t_a, detected at the position x_b. To find the amplitude ⟨x_b|ψ(t_b)⟩, we need to apply the free particle Schrödinger equation to the initial state

(6) |ψ(t_a)⟩ = |x_a⟩.

This makes the momentum completely undetermined, and the photon spreads like a spherical wave, preventing any possibility of being described by a wavefunction evolving unitarily from one point to another. But this is an idealization, because the photon is never emitted from a pointlike source. For example, if it is emitted by an atom, the wavefunction |ψ(t_a)⟩ is of the size of an atom. But even the wavefunctions of the electrons in the Hydrogen atom extend radially over the entire space. The amplitudes decay exponentially with the distance, but still they do not vanish. For this reason, both the emission and the detection of photons are wrongfully represented in standard QM as taking place at a definite position. To be realistic we have to admit that what we call the position of emission or detection is actually an average position, and the true state of a photon when emitted or absorbed is unknown. Using the eigenstates of the position to represent them is an approximation. A better approximation would be a Gaussian function centered at x_a and having width equal to the radius of the atom.

What if we consider, rather than precise locations, more extended regions? Suppose now that immediately after the emission, at the time t_a, the photon passes through an extended but bounded region of space A ⊂ R³. Given this loosening of the initial condition, could it be possible that the Schrödinger equation itself makes the wavefunction of the photon evolve so that at a time t_b > t_a it passes through a bounded region B ⊂ R³ (for instance, where it is detected)?
Unfortunately, no matter how large we allow the regions A and B to be, as long as they are bounded, it is impossible for a free particle to be confined at a time t_a in the region A, and later at t_b in the region B, without breaking the unitary evolution. Suppose that at t_a the support of the wavefunction is included in A, supp(ψ(t_a)) ⊆ A. Then its Fourier transform is an entire function (a function holomorphic on the entire complex plane), so it cannot vanish on any open set, and its support covers the entire domain of momenta. This means that no matter how large A is, the wavefunction |ψ(t_a)⟩ will be a superposition of plane waves of almost all possible momenta. So immediately after t_a the wavefunction will spread over the entire space, and there is no way that at a later time t_b, supp(ψ(t_b)) ⊆ B.
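This can be seen numerically (a free particle in one dimension, with ħ = m = 1; the grid, the bump profile and the time step are arbitrary choices of mine): a wavefunction compactly supported in A = [-1, 1] acquires nonzero probability outside A under free evolution, however short the time.

```python
import numpy as np

# Grid and momentum lattice for a periodic box of length L (box chosen large
# enough that the boundary plays no role on the time scale considered).
N, L = 4096, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Initial state: a smooth bump compactly supported in A = [-1, 1].
in_A = np.abs(x) <= 1.0
psi = np.where(in_A, np.cos(np.pi * x / 2) ** 2, 0.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))   # normalize on the grid

def evolve(psi, t):
    """Free Schroedinger evolution, exact in momentum space."""
    return np.fft.ifft(np.exp(-0.5j * k ** 2 * t) * np.fft.fft(psi))

p_outside_before = np.sum(np.abs(psi[~in_A]) ** 2)        # exactly 0
psi_t = evolve(psi, 0.01)
p_outside_after = np.sum(np.abs(psi_t[~in_A]) ** 2)       # already nonzero
norm_after = np.sum(np.abs(psi_t) ** 2)                   # still 1: unitary
```

The momentum-space propagator multiplies each plane wave by a phase, so the evolution is exactly unitary on the grid; the leaked probability outside A is a discretized version of the instantaneous spreading argued above.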
From this point of view, the wavefunction collapse seems like a clean solution to this problem, because it allows the wavefunction extended in the entire space to become suddenly localized in a small region.
Consider now a Hydrogen atom in a water molecule in a glass of water (which is a bounded region). The Hydrogen atom will remain in the glass for a long time. If the collapse is the explanation of localization, then in this case the wavefunction of the atom has to collapse all the time in order to remain in the glass. An alternative is to admit that it extends over the entire space, but is "more localized" at a certain position in the glass. This weakening of the condition of being localized at a definite position allows it to remain in the glass for a long time, without the need to collapse all the time.
Gaussian wavepackets, despite saturating the uncertainty relation and remaining Gaussian in the absence of interaction, spread in space. This would make it impossible for a small detector on Earth to detect, without collapse, a photon emitted by an atom in a distant galaxy. A more appropriate solution would require using nonspreading wavepackets, so that if they are localized in a place at t_a, they will also be localized at t_b. Fortunately, such solutions are known for the Dirac, Klein-Gordon and Schrödinger equations [29,30,31,32,33,34,35,36]. Moreover, such solutions are even able to reproduce two-slit interference, and therefore the Born rule for this case [37].
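The scale of the Gaussian spreading can be checked with the standard width formula for a free massive particle (ħ = 1; the numbers below are hypothetical and serve only to show that the width eventually grows linearly in time):

```python
import numpy as np

# Standard result for a free Gaussian wavepacket (hbar = 1):
# sigma(t) = sigma_0 * sqrt(1 + (t / (2 m sigma_0^2))^2),
# so for large t the width grows linearly, sigma(t) ~ t / (2 m sigma_0).
def sigma_t(sigma_0, m, t):
    return sigma_0 * np.sqrt(1.0 + (t / (2.0 * m * sigma_0 ** 2)) ** 2)

# Hypothetical numbers in natural units, only to show the trend:
initial_width = sigma_t(1.0, 1.0, 0.0)    # = sigma_0
late_width = sigma_t(1.0, 1.0, 1.0e6)     # ~ 5e5: enormous spreading
```

Over cosmological travel times the width dwarfs any detector, which is why the text turns to nonspreading wavepacket solutions instead.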
This analysis shows that it is not accurate to consider the measurement of position as finding the wavefunction to be precisely localized at a definite position. Rather, "most of the wavefunction" is localized in a small region around that position. The epistemic wavefunction is in this case |x⟩, because this is what we think we know about the particle detected at x, but the ontic wavefunction is something completely different: extended over the entire space, but concentrated around x, and better approximated by such a solitonic wavefunction which is "mostly localized" around x.
Therefore, a photon can travel from the place where it is emitted to the place where it is absorbed without breaking unitary evolution. Clearly its wavefunction has to be very special for this: the initial conditions have to be fine-tuned to also satisfy the final conditions, as we already know from the discussion in section §2.
Note that the notion of "mostly localized" does not refer to the probabilities, but to a physical wavefunction whose existence I propose here, which is merely approximated by the measurements. The probabilities apply to our knowledge of the wavefunction, while the localization I am proposing here refers to the physical, ontic wavefunction, whose possibility of existence is explored here.
This only shows that the possibility of this taking place unitarily exists, but it does not explain why the wavefunction takes such a special form. Perhaps a deeper understanding of particles, which still eludes us, will provide an explanation.

4. The "unitary collapse" condition
In order to be rigorous, we have to define what "mostly localized" means. We can define the degree of localization of a wavefunction |ψ⟩ inside a region A ⊂ R³ as

(7) Λ_A(|ψ⟩) := ∫_A |ψ(x)|² dx / ∫_{R³} |ψ(x)|² dx.

Maybe it is more appropriate to use a more elaborate definition, for instance one based on the standard deviation, which is natural for Gaussian wavepackets, where we can use the width of the packet. However, for simplicity we consider equation (7). Let us fix a value 0 < Λ ≤ 1 and write

(8) |ψ⟩ ⊩ (A, t)

if Λ_A(|ψ(t)⟩) ≥ Λ. We say that a particle whose wavefunction is |ψ⟩ is in the region A at the time t if |ψ⟩ ⊩ (A, t). Thus, I propose the following unitary collapse condition: in the real world, the wavefunction evolves unitarily so that at the times t_a and t_b it passes through the regions A and B.
In other words, |ψ⟩ has to simultaneously satisfy the following three conditions:
(i) |ψ⟩ is a solution of the Schrödinger equation;
(ii) at the time t_a, the wavefunction is localized in A, in the sense of (7): Λ_A(|ψ(t_a)⟩) ≥ Λ;
(iii) at the time t_b, the wavefunction is localized in B: Λ_B(|ψ(t_b)⟩) ≥ Λ.
This condition does not contradict the Schrödinger equation, and in fact proposes that it remains true even in cases when we may think it is violated by a discontinuous collapse. For this to be true, it is necessary that events like emission and absorption be only weakly localized, so that there is always a solution of the Schrödinger equation which satisfies them.
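A discretized toy version of definition (7) makes the condition concrete (the grid, the Gaussian and the threshold below are my own illustrative choices):

```python
import numpy as np

# Degree of localization, definition (7), on a 1D grid:
# Lambda_A(psi) = sum_A |psi|^2 / sum |psi|^2 (the grid spacing cancels).
x = np.linspace(-20.0, 20.0, 8001)

def degree_of_localization(psi, region_mask):
    p = np.abs(psi) ** 2
    return np.sum(p[region_mask]) / np.sum(p)

# A Gaussian centered at x = 3 with unit width, and the region A = [0, 6],
# which extends 3 standard deviations on each side of the center.
psi = np.exp(-0.5 * (x - 3.0) ** 2)
in_A = (x >= 0.0) & (x <= 6.0)
Lam = degree_of_localization(psi, in_A)   # ~ 0.997 (the 3-sigma fraction)

# Condition (8) with threshold Lambda = 0.99: the particle "is in A".
particle_in_A = Lam >= 0.99
```

With this threshold the Gaussian counts as "mostly localized" in A even though its tails extend over the whole line, which is exactly the weakening of localization proposed in the text.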
The "unitary collapse" condition can be generalized to more particles, and to more places where the particles have to be found. Also, it can be generalized to conditions that are close not to a particular eigenstate of the position, but of any other observable. This generalization will be made in section §8.

5. Delayed initial conditions
The dependence on the final conditions seems retrocausal, because the initial conditions of the observed system have to be tuned precisely so that the wavefunction becomes localized when its position is detected. But this should not come as a surprise, because we already know that the state of the wavefunction prior to the measurement is constrained by the experimental setup. This is unavoidable in any interpretation in which the outcomes of the measurements are encoded in one way or another in the initial conditions [38,39,40]. An attempt to describe the measurement by unitary evolution is no exception [28].
The kind of special initial conditions which allow unitary evolution to be compatible with measurements, proven to be required in [28] and used in section §3, can be interpreted as superdeterminism (see for example [41]), or as retrocausality. This is a delicate problem, because it seems to be a threat to the principle of causality. In the following, I will discuss some proposals, and argue that this does not lead to a breaking of causality.
This apparently retrocausal feature of quantum mechanics is actually often encountered and discussed in the literature. It is at the origin of the transactional [42,43] and the time symmetric [44,45,46,47,48] approaches to QM. Several proposals to deal with this issue are known, for example [49,50,51,52].
Here I will argue that the apparent retrocausality can exist in the proposed model without breaking the principle of causality. I will discuss two equivalent pictures, one which is temporal, and another one which is timeless, based on the block view.
The temporal interpretation of the apparent retrocausality is based on delayed initial conditions [53,54,55], in the following sense. The initial conditions of a classical system are usually not restricted: the system can start in any initial state, the dynamics works without problems, and the initial state can be measured so that we get the complete information. By contrast, the initial conditions of a quantum system can be seen as not determined until the complete information can be extracted from measurements. The complete information is hidden by the very principles of QM. Therefore, the information usually contained in the initial conditions is distributed in spacetime, at the various places where quantum measurements and observations are performed, and no matter how many measurements we perform, we will never find the complete wavefunction of the world.

What we can have is a set of possible solutions of the Schrödinger equation which satisfy the observations in a global and self-consistent manner. This set of possible solutions is reduced in time, as new measurements are performed, so that in time we accumulate more and more knowledge about the quantum state. This is not merely a collection of information about the quantum state, because different choices of the observables lead to different possible solutions.

This picture does not violate causality, because it cannot be used to change a past already recorded by observations: after each observation we keep only the solutions compatible with the outcome of that observation, so no contradiction is allowed. Of course, the big question is whether the set of solutions satisfying all constraints due to observations is always non-empty, no matter how many observations we make. This problem is addressed in this article by using the fact that there is a trade-off between error and disturbance. In particular, this point is addressed in section §10 for a specific example.
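The elimination process can be sketched with a deliberately crude toy model (entirely my own construction, not the article's formalism): a finite set of candidate states stands in for the set of possible solutions, and each observation discards the candidates inconsistent with its outcome.

```python
import numpy as np

# Candidate "solutions" for one qubit: spin-z and spin-x eigenstates.
s = 1 / np.sqrt(2)
candidates = {
    "z+": np.array([1, 0], dtype=complex),
    "z-": np.array([0, 1], dtype=complex),
    "x+": np.array([s, s], dtype=complex),
    "x-": np.array([s, -s], dtype=complex),
}

def consistent(state, projector, tol=1e-9):
    """A candidate survives an observed outcome only if that outcome has
    probability 1 for it, i.e. it is the corresponding eigenstate."""
    p = np.vdot(state, projector @ state).real
    return p > 1 - tol

# Observation: the qubit is found with spin up along z.
P_z_up = np.array([[1, 0], [0, 0]], dtype=complex)
surviving = {k: v for k, v in candidates.items() if consistent(v, P_z_up)}
# Only "z+" remains: the observation retroactively fixes which candidate
# described the system, without changing any already recorded outcome.
```

The filtering never contradicts a recorded outcome, because only solutions compatible with all previous outcomes are ever kept; whether the surviving set stays non-empty is precisely the question raised in the text.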
The timeless picture, which is equivalent to the delayed initial conditions picture, will be discussed in the following.

6. Spacetime locality
Consider Bohm's version of the Einstein-Podolsky-Rosen experiment (EPR-B) [56,39]. This version involves the entanglement of the spin states of two particles. The analysis of a way by which the EPR-B experiment can take place by unitary evolution was discussed in [57,53,54].
The EPR-B experiment involves the decay of a composite particle which is in the singlet state |ψ⟩ = (1/√2)(|↑⟩_A|↓⟩_B − |↓⟩_A|↑⟩_B). The two particles, labeled A and B, resulting from the decay arrive at Alice and Bob. Alice measures the spin of particle A along a direction in space, and Bob measures the spin of B. Because both of them find definite and separate outcomes for their experiments, it follows that if unitary evolution is maintained, the two particles arrived at them in separate states [57]. If we apply the evolution equation backwards in time, we can conclude that after the decay the particles had separate states. This means that between the decay and the measurement both particles behaved locally. Therefore, the correlations between the values obtained by Alice and Bob are enforced locally, through the histories of the two particles. We find again that the states of the particles immediately after the emission had to be fine-tuned so that Alice and Bob find them in the correct states.
An experiment verifying what happens in the EPR-B experiment with the weak values of the spin between the emission and the detection of the particles was explored in [58]. The conclusion of the article was that what appears to be nonlocal in space turns out to be perfectly local in spacetime. Although in [58] the result is interpreted in terms of the two-state vector formalism (see section §12), it is also consistent with other interpretations [49,50,52]. In addition, it supports the proposal of this article, that the evolution between the emission and the detection is unitary, and there is no discontinuous collapse. We can consider the processes taking place during the EPR-B experiment as being local in the sense that the particles are described by local solutions of the Schrödinger equation. This kind of spacetime locality is not what we usually expect when we speak about locality, because it depends on the final conditions imposed by the experimental setup. The solutions are local in the sense that they obey partial differential equations on spacetime, but they are also subject to boundary conditions which are global and impose the apparent (space) nonlocality like that from Bell's theorem.
It is normally considered that the Schrödinger equation predicts that the particles are entangled after the decay. However, by requiring the solution to satisfy the final conditions, the particles turn out to be separated right after the decay. Since the entire past history of the particles has to satisfy the final conditions, the projection has to be applied to the entire life span of the particles, that is, to their past history. This kind of projection, applying to the entire history, does not introduce a discontinuity or a violation of the Schrödinger equation.
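For concreteness, the Born-rule correlation that any account of EPR-B must reproduce can be computed directly for the singlet state (standard QM, independent of interpretation; the measurement directions below are arbitrary choices of mine):

```python
import numpy as np

# Pauli matrices and the spin operator along a unit direction n.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(n):
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <singlet| (a.sigma) x (b.sigma) |singlet> = -a.b."""
    return np.vdot(singlet, np.kron(spin(a), spin(b)) @ singlet).real

a = np.array([0.0, 0.0, 1.0])                       # Alice measures along z
b = np.array([np.sin(0.7), 0.0, np.cos(0.7)])       # Bob, tilted by 0.7 rad
corr = E(a, b)                                      # equals -cos(0.7)
```

These are the correlations that, in the proposal of the text, have to be enforced locally through the fine-tuned histories of the two particles rather than by a nonlocal collapse.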

7. Global consistency condition
The solutions satisfying both the initial conditions and the final ones are local, as solutions of the Schrödinger equation, but they are also subject to global constraints, given by the initial and final constraints, resulting from the preparation and the measurement.
The idea of imposing global constraints on local solutions is not unprecedented: Schrödinger derived the discrete energy spectrum of the electron in the atom by imposing boundary conditions on the sphere at infinity [59]. So the solutions are local, but among all local solutions we accept as physical those that are consistent everywhere, including at infinity. More generally, they also have to remain consistent in the future. Such conditions are imposed by future measurements, so in order to ensure consistency, we keep only the solutions of the Schrödinger equation which are and remain consistent everywhere in spacetime.
The consistency of the solutions with future measurements implies that the state of the system before measurement depends on the observables we will choose to measure in the future [28], and this has the unpleasant appearance of a conspiracy. This can be interpreted in a less striking way, if we appeal to the block world view. The block world view of the universe is mostly known from Einstein's relativity, but it is also useful in Galilean relativity. If we consider the solutions not given by complete initial conditions at some point in the past, but as a combination of delayed initial conditions imposed at various points in spacetime, then the block world view provides a more natural picture [60,54,55].
Other proposals in the same spirit are known, see for example the toy model of using the block world view in [51]. The resemblance with the toy model proposed in [51] consists in the fact that both proposals require consistency between conditions imposed at different places and times, in a block world. The difference is that, while the toy model is a simple graph (nevertheless having the desired features of retrocausality), the model proposed in this article is quantum, and is governed by the Schrödinger equation, with minimal differences from standard QM (namely, the unitary account of the apparent collapse).
Another way to see this block world picture is as a sheaf of local solutions, which can be combined only in certain ways to obtain a globally consistent solution [54]. Thus, quantum reality is like a puzzle which can be solved only in consistent ways [55].
Even if the block world view may be satisfactory for some aspects of the problem, when we think of the same phenomena in terms of time evolution, the conspiracy and the apparent retrocausality return. In sections §8- §9 I will present a more rigorous picture which will show that this does not imply a violation of causality, because it does not change the past, it only determines the parts of the past which were not already determined.

8. The wavefunction-events picture
We are now in a position to provide a picture of a system, or of the Universe, based exclusively on the constraints imposed on the wavefunction by various events: emission, detection at a particular (always approximate) position or eigenstate of another observable, passing through slits, etc.
We denote by M the spacetime. It will be useful to define on M a time coordinate t : M → R which foliates it into spacelike surfaces of constant time, M = T × S = ⋃_{t∈T} S_t, where T is an interval in R, and S_t = {t} × S, S being the physical space. The following can also be applied to relativistic theories, because it will not break Lorentz invariance.
Let H be the total Hilbert space of the universe, which may contain the Fock spaces of all particles, or any suitable space needed to represent the entire universe and all interactions. We assume that the Schrödinger equation

(9) i ∂/∂t |ψ(t)⟩ = Ĥ(t)|ψ(t)⟩

has solutions of the form ψ : T → H. We denote the space of solutions of equation (9) by H_sol.

Definition 8.1. An event is a pair ε = (t, s), where t ∈ T and s is a subset of the projective Hilbert space P(H). Equivalently, we can take s to be a subset of the Hilbert space of the total system, s ⊂ H, which satisfies the condition that for any |ψ⟩ ∈ H and any α ∈ C \ {0}, |ψ⟩ ∈ s ⇔ α|ψ⟩ ∈ s. This condition just ensures that the set s consists of rays of the Hilbert space.
Example 8.2. For instance, the event that the wavefunction of a particle |ψ⟩ is localized at the time t in the region A, hence satisfies (8), is ε = (t, s), where s is the set s = {|ψ⟩ ∈ H₁ : |ψ⟩ satisfies (8)} ⊗ H₂, H₁ being the Hilbert space of the particle, and H₂ the Hilbert space of the rest of the universe, hence H = H₁ ⊗ H₂.

Definition 8.3. We call an event as in Example 8.2 a spacelike event. Here, the term "spacelike" reflects the fact that the points in the region A have the same time, at least in one reference frame, similarly to the case in relativity.
We see from Example 8.2 that although Definition 8.1 of an event refers to the entire Hilbert space, the event itself can be about any subsystem. In this example it was about localization in space, hence around a position eigenstate, but it can just as well be about localization around any state vector, an eigenstate of some other observable.
We could have taken the definition of an event to be such that s is a Hilbert subspace. This would have allowed us to use projectors. There is a reason why I prefer a general subset and not a Hilbert subspace: not every condition can be expressed by a projector or a Hilbert subspace. For instance, the set of wavefunctions satisfying the condition (8) does not form a vector space. Hence, using Hilbert subspaces instead of sets in Definition 8.1 would restrict the conditions too much, making the notion of event too simple and unrealistic.
Any position measurement of a particle happens around a particular position and time, but in general that region is extended in spacetime. The event that a photon passes through a slit cannot be a spacelike event as in Example 8.2, because the slit is also extended in time, and the photon can pass through the slit at various times. For this reason, we need to consider regions A ⊂ M which may also extend in time. But the fact that the subset s from an event ε = (t, s) is a general subset of the projective Hilbert space allows this situation too. This is because the condition |ψ(t)⟩ ∈ s is equivalent to the condition |ψ(t')⟩ ∈ Û(t', t)s. In fact, the condition can be seen as selecting a subset of solutions defined for all times, ψ ∈ σ, where σ ⊂ H_sol is a subset of the space of solutions H_sol, and not of the Hilbert space H at a particular time t. This justifies the following alternative formulation of Definition 8.1:

Definition 8.4. A timeless event is a subset σ ⊂ P(H_sol).

For any t, the timeless events of the form σ ⊂ P(H_sol) are in one-to-one correspondence with the events of the form ε = (t, s). When time has to be contained explicitly, we will use events of the form ε = (t, s), as in Definition 8.1. This will be the case most of the time, because the events will be ordered in time. But in reality they can just as well be defined in a time-independent way, as subsets σ ⊂ P(H_sol), as in Definition 8.4. Let us call the representation of events from Definition 8.1 the temporal picture, and that from Definition 8.4 the timeless picture of events.
At every time there is a collection of possible wavefunctions satisfying all the events that already happened, each evolving purely unitarily according to the Schrödinger equation. Every new event, say every new measurement, only eliminates some of these solutions of the Schrödinger equation, but this elimination applies over the entire time range of each solution. By this, it reduces the set of possible solutions from H_sol to the subset of solutions also satisfying the new events, giving the appearance of a collapse.
A registry of events is a collection of events E ⊂ 2^P(H_sol) or, if the time is specified for each event, E ⊂ T × 2^P(H), where 2^X is the usual notation for the collection of all subsets of a set X. We denote by H_sol(E) the set of the solutions of the Schrödinger equation satisfying the registry of events E. It is trivial to see that the set of solutions from H_sol satisfying the events in a registry E is, in terms of events of the form (t, s), given for any chosen initial time t_0 by

(12) H_sol(E) = ⋂_{(t,s)∈E} Û(t_0, t) s,

where each solution is identified with its value |ψ(t_0)⟩ at the time t_0. Equivalently, in the timeless picture,

(13) H_sol(E) = ⋂_{ε̃∈E} ε̃.

In most reasonable cases the condition defining an event is about localization in space: a particle was emitted, absorbed, or passed through a certain region of spacetime. Also, in the majority of situations the region can be approximated by a spacelike region, hence the event can be a spacelike event as in Example 8.2. It is not mandatory for the region A to be connected. If A is the union of two or more connected components, then the particle has two or more alternatives, and it will use all of them. This is the case in the two-slit experiment: the two slits can be seen as two connected components of a single region A.
While both definitions of events, 8.1 and 8.4, refer to the Hilbert space of the entire world, the notion of event can also be about subsystems or particles, as already pointed out in Example 8.2. Simply take s to be of the form P(s_1 ⊗ H_2), where s_1 ⊂ H_1 is a subset of the Hilbert space H_1 of a subsystem, and H_2 is the Hilbert space of the rest of the world. This allows us, when defining an event, to refer specifically to subsystems. It also works well for entangled systems.
What about interactions involving some particles as input and some as output? Such an event can be described in terms of spacelike events. For example, if the region A is bounded by the times t_1 and t_2, we can describe it by spacelike events at t_1 ensuring that the input particles enter the region, and other spacelike events at t_2 ensuring that the output particles leave it. We can also add the condition that a particle was annihilated in the region A and therefore does not leave it, or that it was created and therefore is not among the input particles, simply by imposing that it does not pass through S_{t_2}, respectively S_{t_1}.
We can thus use events to describe and construct all sorts of quantum phenomena and experiments.

9. The history and the wavefunction collapse

We derive some properties.
Lemma 9.1. For two registries of events E_1 and E_2,

(14) H_sol(E_1 ∪ E_2) = H_sol(E_1) ∩ H_sol(E_2),
(15) H_sol(E_1 ∩ E_2) ⊇ H_sol(E_1) ∪ H_sol(E_2),
(16) if E_1 ⊆ E_2, then H_sol(E_2) ⊆ H_sol(E_1).

Proof. We apply Definition 8.4, equation (13), and the properties of operations with sets.
(1) From equation (13), H_sol(E_1 ∪ E_2) = ⋂_{ε̃∈E_1∪E_2} ε̃ = (⋂_{ε̃∈E_1} ε̃) ∩ (⋂_{ε̃∈E_2} ε̃) = H_sol(E_1) ∩ H_sol(E_2).
(2) From equation (13), H_sol(E_1 ∩ E_2) = ⋂_{ε̃∈E_1∩E_2} ε̃ is an intersection over fewer events, hence it contains both H_sol(E_1) and H_sol(E_2), and therefore their union.
(3) If E_1 ⊆ E_2, then any solution ψ ∈ H_sol(E_2) satisfies the events of E_1, hence ψ ∈ H_sol(E_1). Alternatively, (16) follows from (14): H_sol(E_2) = H_sol(E_1 ∪ E_2) = H_sol(E_1) ∩ H_sol(E_2) ⊆ H_sol(E_1). □

Consider the spacetime M, and a registry of events E ⊂ T × 2^P(H). From the point of view of a time coordinate, the spacetime M is foliated, and the events can be ordered by time. Some of them may be simultaneous with respect to that foliation. For any time t, we define the subregistry

(17) E(t) = {(t′, s) ∈ E | t′ ≤ t}

of events that have already happened by the time t. For a sequence of times . . . < t_{−1} < t_0 < t_1 < . . ., there is a sequence of event sets

(18) . . . ⊆ E_{−1} ⊆ E_0 ⊆ E_1 ⊆ . . .
where E_i := E(t_i), which describes the history of the system. The corresponding sets of solutions of the Schrödinger equation also form a sequence (see fig. 1)

(19) . . . ⊇ H_sol(E_{−1}) ⊇ H_sol(E_0) ⊇ H_sol(E_1) ⊇ . . .

Therefore, after each event, the set of solutions satisfying the already passed events is reduced to a subset, similarly to a projection. This gives the appearance of wavefunction collapse, but the reduction is not real: rather, we eliminate the solutions that do not correspond to the observations encoded in the events.
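The set algebra behind Lemma 9.1, and the way each new event only narrows the set of admissible solutions, can be mimicked with finite sets. In this toy model of my own, "solutions" are just integer labels and each event is the set of labels compatible with it; it is not an actual Hilbert space, only an illustration of the intersections involved:

```python
from functools import reduce

# Toy model: solutions are labels 0..9; an event is the subset of
# solutions compatible with it; a registry is a set of events;
# H(E) is the intersection of the events in the registry.
solutions = frozenset(range(10))

def H(E):
    """Set of solutions satisfying every event in the registry E."""
    return reduce(frozenset.intersection, E, solutions)

e1 = frozenset({0, 1, 2, 3, 4, 5})   # event: "solution lies in {0..5}"
e2 = frozenset({3, 4, 5, 6, 7})      # a later, independent event
e3 = frozenset({4, 5, 8})

E1 = frozenset({e1})
E2 = frozenset({e1, e2})

# Lemma 9.1 in the toy model:
assert H(E1 | E2) == H(E1) & H(E2)        # union of registries
assert H(E1 & E2) >= H(E1) | H(E2)        # intersection of registries
assert E1 <= E2 and H(E2) <= H(E1)        # larger registry, smaller set

# The "history": each new event only narrows the set of admissible
# whole-time solutions -- the appearance of collapse.
history = [frozenset(), frozenset({e1}), frozenset({e1, e2}),
           frozenset({e1, e2, e3})]
sets = [H(E) for E in history]
# {0..9} ⊇ {0,...,5} ⊇ {3,4,5} ⊇ {4,5}
assert sets[0] >= sets[1] >= sets[2] >= sets[3]
```

Each step of the history removes labels, but never alters a surviving label, mirroring the claim that no solution is changed, only eliminated.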
Every new event adds new constraints, acting as delayed conditions on the set of admissible solutions of the Schrödinger equation. The global consistency condition is thus satisfied by the reduced set of solutions.

10. Measurements as events
Is it possible to approximate well enough a history based on unitary evolution interrupted by collapse events (as in von Neumann's formulation) with a history based on unitary evolution only, as in the wavefunction-events picture? If so, which one is closer to reality and which is the approximation?
Let H_1 be the Hilbert space representing the observed subsystem, and H_2 the Hilbert space of the rest of the world. A measurement in von Neumann's scheme is accompanied by events saying that a certain outcome was obtained. So the possible events corresponding to the measurement of the subsystem represented by the Hilbert space H_1 with the observable O_1, having as eigenspaces H_{1,O_1,λ} for each eigenvalue λ, are of the form ε_λ = (t, P(H_{1,O_1,λ} ⊗ H_2)). If we refer only to the subsystem H_1, then two successive incompatible measurements correspond to inconsistent events, in the sense that they cannot be satisfied by the same solution. This requires a discontinuity, so that one solution satisfies the first event and another one satisfies the second. Hence the collapse. If we include the "environment" H_2, it may contain interactions which change the first solution into the second in a continuous way, such that for the entire system the evolution is unitary. This is possible at least in some cases, as we can see from the example of the particle moving from one place to another discussed in section §3. But for this to be possible, the initial conditions of both the observed system H_1 and the rest of the world H_2 have to be very special [28]. Equally important, error is necessary to allow two successive measurements to be compatible. The example of the photon moving between two locations discussed in section §3 has both special initial conditions and error, because the photon is never truly an eigenstate of the position operator. Can this work for other observables too? Consider for example a particle of spin 1/2. Measuring the spin at t_a along the x axis, represented by the observable S_x, can result in two possible outcomes, |↑_x⟩ and |↓_x⟩. Suppose that the outcome is |↑_x⟩.
If the particle evolves freely, a subsequent measurement at t_b > t_a along the z direction results in one of the two possible outcomes |↑_z⟩ and |↓_z⟩, each with probability 1/2, since |↑_x⟩ = (1/√2)(|↑_z⟩ + |↓_z⟩). How is it possible that the particle evolves freely from an eigenstate of the observable S_x to an eigenstate of S_z?
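The quoted probabilities follow directly from the Born rule; a quick numerical check with the standard Pauli x eigenstate written in the z basis (the variable names are my own labeling of the kets, with ħ = 1):

```python
import numpy as np

# Pauli x matrix; its eigenvectors are the S_x eigenstates (S_x = sx/2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Kets in the z basis
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |↑x>
up_z = np.array([1, 0], dtype=complex)                # |↑z>
dn_z = np.array([0, 1], dtype=complex)                # |↓z>

# |↑x> = (|↑z> + |↓z>)/√2, so each z outcome has Born probability 1/2
p_up = abs(up_z.conj() @ up_x) ** 2
p_dn = abs(dn_z.conj() @ up_x) ** 2
assert np.isclose(p_up, 0.5) and np.isclose(p_dn, 0.5)

# Sanity check: up_x is an eigenvector of the Pauli matrix sx
# with eigenvalue +1.
assert np.allclose(sx @ up_x, up_x)
```

The puzzle in the text is precisely that this 1/2–1/2 statistics is obtained even though, under free evolution, |↑_x⟩ never becomes an S_z eigenstate.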
In the following I will argue that this may be possible even under the assumption of unitary evolution. First, it is clear that measurements, in particular spin measurements, are subject to errors. The experimental setup is such that the positions of the detected particles, from which we infer the spin, are subject to errors. While the beams can be separated at will in order to exponentially reduce their overlap, there is still error in the orientation of the magnetic field, and in the alignment of the magnetic moment. As with any measurement, we do not unequivocally detect the eigenstates, because real measurements are only approximately projective, and actually correspond to POVMs, which allow, with a small but nonzero probability, the occurrence of any possible outcome. In addition, in the case of successive measurements of the spin along different axes, the trade-off between error and disturbance can be such that the conditions imposed by both measurements are satisfied. Also, the magnetic fields of the two Stern-Gerlach devices used to measure the spin rotate the spin orientation, and this rotation can be such that the result |↑_z⟩ or |↓_z⟩ is obtained, even if previously the spin was detected along a different axis. Moreover, one should not forget that the measurement device itself is a quantum system whose complete quantum state we do not know. This means that its initial conditions are not completely fixed by our observations, and they introduce some freedom, which may allow it to interact with the observed system so as to disturb it into one of the possible eigenstates [57,54]. All these factors provide enough freedom from the constraints that the possibility that unitary evolution is compatible with the outcomes of two successive non-commuting measurements cannot be easily excluded.
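The remark that real detections correspond to POVMs rather than exact projectors can be made concrete with a minimal noisy spin readout. The symmetric error model and the value eps = 0.01 below are illustrative assumptions of mine, not taken from any experiment:

```python
import numpy as np

eps = 0.01  # illustrative readout error probability

up = np.array([1, 0], dtype=complex)   # |↑z>
dn = np.array([0, 1], dtype=complex)   # |↓z>
proj = lambda v: np.outer(v, v.conj())

# Noisy-readout POVM: with probability eps the apparatus reports the
# wrong outcome. The elements are positive and sum to the identity.
E_up = (1 - eps) * proj(up) + eps * proj(dn)
E_dn = eps * proj(up) + (1 - eps) * proj(dn)
assert np.allclose(E_up + E_dn, np.eye(2))

# Born probability of an outcome E for state psi: <psi|E|psi>
born = lambda E, psi: float(np.real(psi.conj() @ E @ psi))

# Even for the exact eigenstate |↓z>, the "↑" outcome has nonzero
# probability -- no outcome is strictly excluded.
assert born(E_up, dn) > 0
print(born(E_up, dn), born(E_dn, dn))  # prints 0.01 0.99
```

This is the sense in which an approximate measurement leaves room, together with the disturbance, for a continuous or unitary account of successive outcomes.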
To completely rule out unitary evolution, one should perform successive spin measurements which eliminate all possible loopholes that could produce the necessary disturbance. The necessity of special initial conditions of the observed system and the measurement device becomes visible if we consider the possibility of delaying the choice of the second observable, for instance by randomly choosing between the observables S_x and S_z. This is because the way the magnetic field of the first Stern-Gerlach device disturbs the observed system after the first measurement has to depend on the orientation of the Stern-Gerlach device performing the second measurement: if the second observable is again S_x, then there should be no disturbance, while if it is S_z, the disturbance should be maximal.
For this reason, we should replace the events of the form "the observed system is in an eigenspace of this observable" with more flexible approximations. We would want a distance which, when below a certain value, tells us that a solution is close enough to a certain subspace of the Hilbert space to be considered the state of the observed system when measured. This tolerance should be small enough to be within the experimental error, but large enough to allow, together with the disturbance, for a unitary solution. Up to this point, we do not have a proof that a unitary solution, or at least a continuous one, always exists.

11. Unitary histories
It is often claimed that the evolution is unitary even during measurements: in the many worlds interpretation (MWI) [6,8,9], in the consistent histories interpretation [61,62,63], and in the decoherence program [10,26,27]. In fact, in all these interpretations the unitary evolution is recovered only when considering all the worlds/branches/histories together; at the level of each branch, there is always a collapse. In the decoherence program, the density matrix becomes diagonal, and then it is interpreted as a statistical ensemble, so the measurement reveals that the system was in one of the eigenstates. But if we evolve the eigenstate obtained by measurement backwards in time, we find that the initial state was different from what we considered it to be before diagonalizing the density matrix. Interpreting the diagonalized density matrix as representing a statistical ensemble would reconcile the measurement with unitary evolution, except that, had the chosen observable been different, the decomposition of the density matrix as a statistical ensemble would have been completely different. So we have to choose: either admit that there is a discontinuous collapse, or accept that the initial conditions of the observed system depend on what we will choose to measure [28]. If we want to maintain unitarity at the level of each branch, we have to adopt the solution proposed in this paper.
The solution proposed in section §8 may be seen as being based on branches that decohere, or on worlds that split, but in a different way. Rather than having a unique past that splits into many alternative futures, the split happens for the entire history, as if the past history were precisely the one leading to what we observe in the present. The name relative state interpretation may be more appropriate for this interpretation than for the usual MWI. In the wavefunction-events picture, consider a registry E. A new measurement results in its extension, but the extension depends on the outcome, for example on the place where a particle was detected. Suppose that the alternatives are described by a collection of events ε_1, ε_2, etc. Then each alternative event ε_i leads to an alternative extension E_i = E ∪ {ε_i}. Consequently, the associated set H_sol(E) splits into the alternatives H_sol(E_i). In this sense, the unitary interpretation proposed here can be seen as a many worlds interpretation in which the evolution is actually unitary for every possible history or branch.

12. Time symmetry and retrocausality
Let us reverse the time in the conditions from section §4. We define |ψ′(t)⟩ := |ψ(−t)⟩ and Û′(−t_a, −t_b) := Û†(t_b, t_a). Then −t_b < −t_a, and (1) the initial constraint becomes |ψ′(B, −t_b)⟩, (2) the final constraint becomes |ψ′(A, −t_a)⟩. Hence the proposed description is manifestly time symmetric. To ensure that a wavefunction evolves so that it subsequently becomes localized, its initial conditions have to anticipate the experimental setup from the future [28]. There are other cases where such a situation has been accepted. For example, the absorber theory of Wheeler and Feynman proposed a similar feature in electrodynamics [64,65]. The Lagrangian formulation of QM is also time symmetric, and led to the sum-over-histories approach [66,67]. Another formulation based on a Lagrangian, which is also time symmetric, was proposed in [68]. The transactional interpretation of Quantum Mechanics relies on a transaction between past and future [42,43].
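The definitions |ψ′(t)⟩ := |ψ(−t)⟩ and Û′(−t_a, −t_b) := Û†(t_b, t_a) can be verified numerically for a toy time-independent Hamiltonian. The random Hermitian matrix and the chosen times are illustrative, not tied to any particular system:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hm = (A + A.conj().T) / 2              # time-independent Hamiltonian

def U(tb, ta):
    """Propagator U(tb, ta) = exp(-i Hm (tb - ta)), via eigendecomposition."""
    w, V = np.linalg.eigh(Hm)
    return V @ np.diag(np.exp(-1j * w * (tb - ta))) @ V.conj().T

ta, tb = 0.3, 1.7
psi_a = rng.normal(size=n) + 1j * rng.normal(size=n)
psi_a /= np.linalg.norm(psi_a)
psi_b = U(tb, ta) @ psi_a              # forward evolution

# Reversed picture: U'(-ta, -tb) = U(tb, ta)^dagger carries the state
# at -tb (i.e. psi(tb)) back to the state at -ta (i.e. psi(ta)).
U_rev = U(tb, ta).conj().T
assert np.allclose(U_rev @ psi_b, psi_a)

# The reversed propagator coincides with U evaluated with reversed
# time arguments, as expected for a time symmetric description.
assert np.allclose(U_rev, U(ta, tb))
```

Nothing here is specific to three dimensions; the same identities hold for any Hermitian Hm, which is the numerical content of the time symmetry claimed above.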
The two-state vector formalism [44,45,46,47,48] also adopts a time symmetric description of quantum mechanics, based on one state vector evolving towards the future and another evolving towards the past. In combination with weak measurements, this approach has turned out to be a powerful tool for identifying and elucidating various quantum paradoxes. In [58] a version of the EPR-B experiment is presented which shows how future strong measurements appear to affect the results of weak measurements performed in the past. Moreover, this approach provides important clarifications of the measurement problem and the wavefunction collapse, and reveals how time reversibility is attainable under specific conditions [69].
Another approach based on unitary evolution is the cellular automaton interpretation of QM, proposed by 't Hooft [70,23,71], which also leads to apparent conspiracies in the initial conditions.
A quantum measurement acts like a delayed completion of the initial conditions of the observed system. This appears retrocausal, but it cannot be used to change the past, only to decide on values that were not yet observed, or that were hidden. This is similar to the impossibility of using nonlocality to send signals faster than light. Basically, each measurement adds a new event, which merely reduces the set of admissible wavefunctions to a subset. This reduction is, as we have seen, neither a change of the solutions nor a discontinuous collapse; rather, it is similar to an increase in information about the observed systems.
To get a less dramatic picture of this apparent backward causality, we can think of the four-dimensional spacetime as already existing, together with the physical states. The solutions of the Schrödinger equation have to be self-consistent not only locally, but also globally. This global consistency condition should be imposed in spacetime, not only in space, removing the inconsistent solutions. The remaining ones appear nonlocal, but this is just an expression of global consistency [54,55].

13. Possible implications for the quantization of gravity
The semiclassical Einstein equation is

(20) G_ab + Λ g_ab = (8πG/c⁴) ⟨T_ab⟩,

where G is Newton's constant, c the speed of light, Λ the cosmological constant, and G_ab Einstein's tensor. The expectation value of the stress-energy tensor can be taken to be ⟨T_ab⟩ = ⟨ψ|T̂_ab|ψ⟩ [72,73]. Other formulations employ, instead of the wavefunction |ψ⟩, the density matrix or a C*-algebra state [74].
The main arguments against semiclassical gravity come from the impossibility, or at least the difficulty, of accommodating the wavefunction collapse with the Einstein equation. If we take into account the backreaction, spacetime curvature has to depend on the way matter is distributed, and conversely. But a collapse would mean a discontinuous change in the curvature, which apparently could be used to send signals faster than light [75]. Also, a collapse would break the conservation of the stress-energy tensor. In [76], experiments involving superpositions of macroscopically distinct states, with masses whose gravitational field could be measured, were reported. The gravitational field was found to be correlated only with the eigenstate which was detected. According to the authors, this refuted semiclassical gravity, but in the context of the many worlds interpretation of QM [6,7]. The assumption which was refuted was that, if |ψ⟩ evolves unitarily in the multiverse, gravity should correspond to the superposition |ψ⟩, and not to a particular eigenstate |ψ_λ⟩ obtained after collapse. Since the gravitational field was found to correspond to one eigenstate and not to all states in the superposition, semiclassical gravity in the context of MWI was refuted.
But if reality is accurately described by a wavefunction which evolves unitarily, or at least continuously, without a discontinuous collapse, then these problems no longer appear, and the semiclassical Einstein equation (20) consistently connects General Relativity and Quantum Mechanics. This requires, of course, that in equation (20) the wavefunction |ψ⟩ is the ontic one, and not an epistemic or statistical one. Making QM and GR compatible this way does not mean that the world obeys semiclassical gravity, only that, if it is necessary to quantize gravity, it is for other reasons.

14. Open problems
Here I argued for the possibility of avoiding a discontinuous collapse by maintaining unitary evolution, or at least continuity, during the apparent collapse. If such a solution can be proven consistent, it would not only resolve the conflict between collapse and dynamics, but also make QM and GR consistent semiclassically. This possibility justifies research in this direction. However, severe difficulties have to be resolved. The first problem is to find which ontic states correspond to each state obtained from measurement; in other words, for each epistemic state resulting from a measurement, which ontic states can lead to the observed outcome. For example, in section §3 it was shown that the photon cannot be localized at a point, or even in a compact region, if we want to maintain unitarity. This has to be done so that it applies to all possible observables. Maybe this approach will already fail at this step, by turning out not to be flexible enough to describe the apparent collapse. If it works, the next necessary step is to derive the Born rule from the correspondence between epistemic and ontic states, extending the results obtained in [37] to all cases. Can we find experimental evidence supporting the unitary, or at least continuous, version of QM rather than the discontinuous collapse based one? Can we find rigorous theoretical evidence, for example from the consistency between QM and GR required by semiclassical gravity? At least we have seen that it may be possible to save unitarity, and this possibility is worth exploring, for its implications for the foundations of QM and of semiclassical gravity.