Commutative Prospect Theory and Stopped Behavioral Processes for Fair Gambles

We augment the Tversky and Kahneman (1992) ("TK92") Cumulative Prospect Theory ("CPT") function space with a sample space for "states of nature", and depict a commutative map of behavior on the augmented space. In particular, we use a homotopy lifting property to mimic behavioral stochastic processes arising from the deformation of stochastic choice into outcome. A psychological distance metric (in the class of Dudley-Talagrand inequalities), popularized by Norman (1968) and by Nosofsky and Palmeri (1997) for stochastic learning, is used to characterize stopping times for behavioral processes. In which case, for a class of nonseparable space-time probability density functions based on psychological distance, independently proposed by Baucells and Heukamp (2009), we find that behavioral processes are uniformly stopped before the goal of the fair gamble is attained. Further, we find that when faced with a fair gamble, agents exhibit submartingale [supermartingale] behavior, subjectively, under a CPT probability weighting scheme. We show that even when agents have classic von Neumann-Morgenstern preferences over probability distributions, and know that the gamble is a martingale, they exhibit probability weighting to compensate for probability leakage arising from their stopped behavioral process.


Introduction
This paper is motivated by the following statements from (Tversky and Kahneman, 1992, pg. 300), hereinafter referenced as "TK92": Let S be a finite set of states of nature; subsets of S are called events. It is assumed that exactly one state obtains, which is unknown to the decision maker. Let X be a set of consequences, also called outcomes. * * * * * * * * * * An uncertain prospect f is a function from S into X that assigns to each state s ∈ S a consequence f(s) = x in X. To define the cumulative functional, we arrange the outcomes of each prospect in increasing order. A prospect f is then represented as a sequence of pairs (x_i, A_i) which yields x_i if A_i occurs . . . . * * * * * * * * * * Cumulative prospect theory [("CPT")] asserts that there exists a strictly increasing value function v : X → Re, satisfying v(x_0) = v(0) = 0, . . . [Emphasis added].
At a more abstract level, (Luce and Narens, 2008, pg. 1) characterized problems of this type thusly: Most mathematical sciences rest upon quantitative models, and the theory of measurement is devoted to making explicit the qualitative assumptions that underlie them. This is accomplished by first stating the qualitative assumptions (empirical laws of the most elementary sort) in axiomatic form, and then showing that there are structure preserving mappings, often but not always isomorphisms, from the qualitative structure into a quantitative one. The set of such mappings forms what is called a scale of measurement. [Emphasis added].
Equally important is the following (Nosofsky, 1997, pg. 347) quote of Luce: ". . . we surely do not understand a choice process very thoroughly until we can account for the time required for it to be carried out . . .".
Even though TK92 did not use the words and phrase "topological lifting", the composite mapping they describe (a choice function from state space to outcome space, and a value function from outcome space to the reals) is, by definition, a topological lifting of a direct map from state space to the reals. Additionally, TK92 did not augment their function space with the prerequisite map from "states of nature", i.e., a sample space, to state space, which gives rise to stochastic choice on state space. Nonetheless, "occurrence of an event effects a change of state" (Norman, 1968, pg. 61). In fact, a review of the literature on prospect theory failed to find explicit analysis of this commutative prospect space. Thus, this paper fills that void by augmenting TK92's CPT function space with mappings from "states of nature", i.e., a sample space, to state space. By so doing we induce a rich topological space, and show how behavioral stochastic processes are generated from microfoundations of the augmented space¹. Additionally, in accord with Luce's surmise about choice and time, we introduce behavior mimicking ε-homotopy sample paths for deformations of stochastic choice into outcome. We show that the sample paths are stopped behavioral processes, and that for fair lotteries they are local martingales² under a CPT probability weighting scheme.
¹ Our methodology is distinguished from that popularized in the literature on stochastic models of learning. See Wickens (1982). A qualitative paper by Steinbacher (2009) used "buzz words" and "catch phrases" to discuss related issues, but did not introduce a parametrized model of a behavioral stochastic process.
² Tangentially related papers by Nosofsky (1997) and Nosofsky and Palmeri (1997) deal with subjects' retrieval time from memory for objects that are similar to exemplars. Even though a random walk model fitted their experimental data, their approach is qualitatively different from that in this paper. Recently, Lindquist and McKeague (2009) proposed a logit model with Brownian-like predictors that may be closest to ours. However, their model was adaptive and based on observations in fMRI and other medical experiments. Our model is normative in the context of the augmented CPT function space.

In section 2 we introduce basic definitions, and the commutative map of prospect theory's function space, including its sample space augment. In subsection 3.1 we introduce the main result of a behavioral homotopic lifting, which serves as the foundation for construction of a behavioral stochastic process in subsection 3.2. In subsection 3.2 we also show how behavioral stochastic processes are uniformly stopped just short of reaching a goal in space-time. In section 4 we apply our theory to fair gambles, and report results under various scenarios of probability weighting. Section 5 concludes with perspectives for further research.

Commutative Map of Prospect Theory's Augmented Function Space
To keep track of the myriad liftings and composite maps in Prospect Theory's function space, we modify the old adage "a picture is worth a thousand words" to "a commutative map is worth a thousand words". The diagram in Figure 1 plainly shows that the stochastic choice map f is a lifting of the imputed direct map g = v ∘ f from state space S to the reals R. Further, v is a functional, of f, on X. So any action on v that yields another functional is an operator by definition. Compare the Tversky and Kahneman (1992) mapping scheme in the introduction, section 1, of this paper. Additionally, the composite direct map g ∘ (w ∘ P), from sample space Ω to the reals R, is a lifting of Y. In that case, for a given outcome x ∈ X, the map v(Y(ω)) is a functional. Thus, any action averaging over that quantity gives rise to an averaging operator. Further, the probability weight function w, from P(Ω) to S, is a lifting of the direct map f ∘ w. Perhaps most important, the composite map w ∘ P is a lifting of the direct map Y = f ∘ (w ∘ P) from sample space Ω to outcome space X. The stochastic choice functions in the extant literature, see e.g., Debreu (1958) and McFadden (1974), consider a mapping P : Ω → S, but not the intermediate composite mapping w : P(Ω) → S which embeds probability weights in state space S, and indirectly in X through the choice function f or directly through the composite f ∘ w. The commutative map plainly shows that the probability weighting map w should be incorporated in any stochastic choice map f : S → X to account for probability distortions. In fact, Figure 1 includes the following complementary space³ that is the sui generis of this paper.
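To see how the diagram commutes, the following minimal Python sketch wires up toy versions of the maps P, w, f, and v. Every functional form here is a hypothetical stand-in chosen only to make the composition concrete, not the paper's calibrated objects; the check is that evaluating value along the lifted route v ∘ Y agrees with the imputed direct route g ∘ (w ∘ P).

```python
# Toy realization of the commutative diagram; every functional form below is
# a hypothetical stand-in chosen only to make the composition concrete.
P = lambda omega: 1.0 / (1.0 + omega)    # P : Omega -> P(Omega), sample point to probability
w = lambda p: p ** 0.7                   # w : P(Omega) -> S, probability weighting into state
f = lambda s: 10.0 * s - 2.0             # f : S -> X, stochastic choice into outcome
v = lambda x: x if x >= 0 else 2.25 * x  # v : X -> R, value function with loss aversion

Y = lambda omega: f(w(P(omega)))         # composite direct map Y = f . (w . P)
g = lambda s: v(f(s))                    # imputed direct map g = v . f

omega = 3.0
lhs = v(Y(omega))                        # lifted route through outcome space X
rhs = g(w(P(omega)))                     # direct route g . (w . P)
```

By construction the two routes agree pointwise, which is exactly what the commutative map asserts.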
Definition 2.1 (Prospect Theory's Complementary Space). Let A, B, C be the [dense] spaces bounded by the commutative map, defined respectively by (2.1). Let M be Prospect Theory's function space such that M = A ⊕ B ⊕ C. Then B ⊕ C = M ⊖ A is Prospect Theory's complementary space. Notationally, we write A^c for the PT complementary space.

³ Our usage of "complementary space" is different from common usage in Hilbert space theory, even though one could perhaps treat the commutative map as one that includes vector valued functions. In which case, if the vector ΩX is orthogonal to the vector XR, the complementary angles subtended at X could be used to "define" the "complementary space" they subtend.
Prospect Theory tends to focus on the space A in (2.1). In this paper, we focus on the space M ⊖ A, or A^c. The mapping Y in Figure 1 has the following interpretation. Since Y : Ω → X implies Y(Ω) ⊆ X, there exists a lottery or gamble {(x_1, p_1), (x_2, p_2), . . ., (x_n, p_n)} such that Y(ω) takes the values (x_1, x_2, . . ., x_n) with corresponding joint probability distribution (p_1, p_2, . . ., p_n). So that for a given realization of outcomes,
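To make the mapping Y concrete, here is a small Python sketch that realizes a lottery {(x_1, p_1), . . ., (x_n, p_n)} as a random variable and checks its sample mean against the expected value. The outcomes and probabilities are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical lottery {(x_1, p_1), ..., (x_n, p_n)}: Y(omega) takes value x_i
# with probability p_i.
outcomes = np.array([-50.0, 0.0, 100.0])
probs = np.array([0.3, 0.2, 0.5])

expected_value = float(np.dot(outcomes, probs))      # E[Y]

rng = np.random.default_rng(0)
draws = rng.choice(outcomes, size=200_000, p=probs)  # realizations Y(omega)
empirical_mean = float(draws.mean())                 # converges to E[Y]
```

The law of large numbers drives the empirical mean of the realizations Y(ω) toward E[Y], which is the sense in which the lottery is a random variable on the sample space Ω.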

Behavioral Stochastic Process
In this section we introduce the homotopy concept and use it to identify a behavioral stochastic process in PT function space.

Behavior mimicking homotopy
The following definitions are critical to this paper.
Let A, B be topological spaces and I be the unit interval I = {u | 0 ≤ u ≤ 1}. Two mappings t_1, t_2 : A → B are said to be homotopic whenever there is a mapping T : A × I → B such that T(x, 0) = t_1(x) and T(x, 1) = t_2(x) for all x ∈ A. If t_1 is the identity map, so that A ⊂ B, then t_2 is a deformation. The set T(I, x) is the path of x. Whenever the spaces are metric, and the paths are all of diameter less than ε, we have an ε-homotopy, or an ε-deformation as the case may be.
For any homotopy ψ : [0, 1] × Ω → S, and for any map Y lifting ψ, there exists a lifted homotopy into X. The commutative map in Figure 2 depicts the homotopy lifting property enunciated in 3.2. According to Figure 1, the mappings Y and f ∘ (w ∘ P) are candidates for homotopy maps from Ω to X. Specifically, let ψ(0, ω) = f ∘ (w ∘ P)(ω) and ψ(1, ω) = Y(ω). Then there is a progressively measurable [discretized] behavioral path process {ψ(t_k^(n), ω)}, k ∈ {0, 1, . . ., 2^n}, that describes the deformation of the stochastic choice function into a random variable in outcome space. That is, ψ(t_k^(n), ω) = Y(ω) + η(t_k^(n), ω), which translates to an ε-homotopy sample path, where η is an idiosyncratic "ε" error term. This reflects the observation that subjects change their mind over time, and that the behavior mimicking deformation ψ measures Y with error. It also accords with Luce's conjecture that our understanding of a choice process is enhanced by accounting for the time taken to make it. See also, (Davidson and Marschak, 1958, pg. 1). In fact, we can write f ∘ (w ∘ P) → ψ(t_k^(n), ω) → Y(ω), which plainly shows that ψ is an intermediate map⁵ between stochastic choice f ∘ w ∘ P and outcome Y(ω) for "time" evolution t_k^(n).

Psychological distance and stopped behavioral processes
Due to measurement error or otherwise, the homotopy process is "stopped" by a subject before the choice deformation process is completed. So we want to measure the closeness of the stopped process to the target Y(ω). According to (Nosofsky, 1997, pg. 348) there exists a psychological distance⁶ between ψ and Y which, in our case, can be represented by the sup-norm metric d(ψ, Y) = sup_ω |ψ(t, ω) − Y(ω)|. This gives rise to the stopping time τ_ε = inf{t ≥ 0 : d(ψ, Y) < ε}. Nosofsky (1997) and Nosofsky and Palmeri (1997) also report that ε ↓ 0 as follows⁷.
For instance, they show that the similarity or proximity of the two functions⁸ is an exponential decay of their distance, s(ψ, Y) = exp(−c · d(ψ, Y)), for some constant c, where in their formulation an attention weight w_j is attached to the distance. See also, Massa and Simonov (2005), who used a similar metric based on conditional variance from a Kalman filter of agents learning about stock prices. For instance, they posit X_{t+1} = AX_t + u_t and R_t = BX_t + v_t, where X_t is the state of the economy at time t, R_t is a vector of portfolio returns, and the filter's conditional variance is the "learning" metric. Inasmuch as our agents' probability weights are included in the composite function f ∘ w ∘ P, we exclude w_j. Cf. Dawes (1979). Also, in keeping with the standard metric in function space, we use a sup-norm. Nosofsky and Palmeri (1997) let M_j be the strength of conviction for a given choice, where i, j ∈ {ψ, Y}, so that the degree to which, say, choice j is preferred is M_j/(M_i + M_j). They also posit that the probability that the choice j is made at time t is given by the density a_j exp(−a_j t). Those parametrizations seem to be fairly standard in the quantitative psychology literature on learning. See e.g., Norman (1968).
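A minimal sketch of the exponential-decay similarity under a sup-norm distance follows; the paths and the decay constant c are illustrative, not fitted values.

```python
import numpy as np

# Sup-norm psychological distance between a behavioral path psi and the target
# Y, with Nosofsky-style similarity s = exp(-c * d).
def sup_distance(psi: np.ndarray, y: np.ndarray) -> float:
    return float(np.max(np.abs(psi - y)))

def similarity(psi: np.ndarray, y: np.ndarray, c: float = 1.0) -> float:
    return float(np.exp(-c * sup_distance(psi, y)))

y = np.array([1.0, 2.0, 3.0])
psi_near = y + 0.1   # path close to the target
psi_far = y + 2.0    # path far from the target
```

Similarity is maximal (equal to one) when the path coincides with the target, and decays exponentially as the sup-norm distance grows, which is the sense in which "ε ↓ 0" sharpens proximity.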
In our case, we modify the Nosofsky and Palmeri (1997) time based density to space-time (ρ, t) by adding a space dimension ρ. Let 0 ≤ ρ ≤ M < ∞. For the purpose of exposition, let a_j(ρ) = ρ, so that f(ρ, t) = ρ exp(−ρt). For our Lebesgue density f(ρ, t) we need the following normalization: since ∫_0^M ∫_0^∞ ρ exp(−ρt) dt dρ = M, we take f(ρ, t) = ρ exp(−ρt)/M. Integration by parts shows that for ε(ρ) = ε > 0, the probability of the intermediate homotopy sample path process being stopped can be computed in closed form. For small ε, after some elementary algebra, that quantity reduces to α = (1/M)(1 − ε²). Despite our [nonseparable] space-time modification of the probability density in Equation 3.10, the probability Pr{|ψ(t, ω) − Y(ω)| > ε} remains well behaved. In fact, we have the following

Proposition 3.1. ψ(t, ω) is well defined for small probabilities.
Proof. Dudley (1967) introduced a class of probability metrics that diminish with distance. For instance, one obtains a bound in which ψ is a Lipschitz continuous function with Lipschitz constant L, P is a Gaussian measure, and µ_ψ is a measure of location, such as the mean or median of ψ.
Thus, the probability α of the process being stopped is uniform across time. If α is large, then the probability of it not being stopped, β = 1 − α, is small. Under CPT, subjects overweigh β with w(β) and underweigh α with w(α), provided α < p_e < β. So even though ψ(t ∧ τ_ε, ω) is a stopped stochastic choice process with probability α of being stopped before attaining the goal Y(ω), agents underestimate that process with distorted probability w(α). These de facto statistical inferences about stochastic choice functions show that even Type I and Type II errors are subject to distortion. For subjects tend to accept a stochastic choice process when they should reject it, and vice versa. The foregoing analysis gives rise to the following

Proposition 3.2. Let f(ρ, t) = ρ exp(−ρt)/M be a space-time probability density function, with psychological distance ρ, where 0 ≤ ρ ≤ M < ∞ and 0 ≤ t < ∞, and let τ_ε be a stopping time for the stochastic choice process ψ, where ε(ρ) ↓ 0. Then for any small ε the process is uniformly stopped with probability α = (1/M)(1 − ε²).
In addition to the foregoing, the following corollary is motivated by (Shao, 2007, pp. 129–131): the subjective probability of Type I error is given by w(α) > α, and vice versa for Type II error.
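Proposition 3.2's density and stopping probability can be checked numerically. The sketch below, with an arbitrary bound M and tolerance ε (both illustrative), verifies that f(ρ, t) = ρ exp(−ρt)/M integrates to one over [0, M] × [0, ∞), and evaluates α = (1/M)(1 − ε²).

```python
import numpy as np
from scipy import integrate

M = 4.0    # illustrative upper bound on psychological distance rho
eps = 0.1  # illustrative tolerance epsilon

def f(t: float, rho: float) -> float:
    # Normalized space-time density f(rho, t) = rho * exp(-rho * t) / M.
    return rho * np.exp(-rho * t) / M

# Integrate t over [0, inf) (inner variable) and rho over [0, M] (outer).
total, _err = integrate.dblquad(f, 0.0, M, 0.0, np.inf)

alpha = (1.0 - eps**2) / M  # uniform stopping probability from Proposition 3.2
```

Note that α does not depend on t, which is the sense in which the process is "uniformly stopped" across time.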

Behavioral submartingale processes
In this section we show how the stochastic choice problem evolves by and through intermediate homotopic maps, and construct a behavioral submartingale process for fair lotteries. As a preliminary matter, we have the following

Proposition 3.4. The process ψ = {ψ(t ∧ τ_ε, ω)} is well defined.

Proof. In Proposition 3.1 we showed that ψ is well defined for small probabilities. Now we extend that definition to stopping times. From Equation 3.2, we use the stopped behavioral hypothesis, which holds for some δ > 0; the latter relation is true for all δ > 0. Hence the proof is done.
Definition 3.3. Let H be the convex hull of homotopic maps in the commutative map in Figure 1.

Proof. The proof is by induction. Let Y_1(x_1, p_1; 0, 1 − p_1) be a simple lottery. In what follows we suppress 0, 1 − p.

Remark 3.1. According to this result, a gamble or lottery is an outcome with its own probability of winning or losing. Implicit in that statement is compound invariance, by Prelec (1998), or the weaker reduction invariance, by Luce (2001).
Then {ψ(t ∧ τ_ε, ω)} is a behavioral submartingale.

Proof. The proof rests on the Doob-Meyer decomposition in Theorem 3.8. By hypothesis, Y_n is a martingale. Additionally, by Lemma 3.6, ψ is an increasing sequence. Thus, E[ψ(t_k, ω) | F_{t_{k−1}}] ≥ ψ(t_{k−1}, ω). However, under Doob's Optional Sampling Theorem in Theorem 3.7 and Equation 3.1 above, E[Y(t_k, ω) | F_{t_{k−1}}] = Y_0, where Y_0 is the fair payoff for the lottery. Subtract η(t_{k−1}, ω) from both sides of the inequality to get, from equation (3.1), the submartingale property for the stopped process. For internal consistency with the stopped process in Proposition 3.2 we must have that η is previsible and increasing, in which case we have a previsible increasing process.

Applications to Fair Gambles
In this section we apply some of the results above to subjects' response(s) to gambles. Let E be the objective expectations operator, and Ẽ be the subjective expectations operator. The homotopic lifting property posits that the stopped behavioral process ψ(t ∧ τ_ε, ω) deforms stochastic choice into the fair gamble Y(ω), so that, under Doob's Optional Sampling Theorem, the stopped process has the fair payoff in expectation. Choose ε sufficiently large so that the probability (1/M)(1 − ε²) is small. (Berger, 1985, pp. 49–50) and (DeGroot, 1970, pp. 90–91) posited a set of "rationality axioms" for construction of utility functions for preferences over probability distributions, in which probability measures are discrete. Thus, in what follows we use discretized probabilities. Also, Prospect Theory tells us that, generally, subjects overweigh small probabilities and underweigh large probabilities¹⁰. Since all probabilities in play are small, the probability weighting function w implies overweighting of the stopping probabilities. By abuse of notation, assume that ε(τ_ε, ω) = ε(τ_ε). In that setup, the [unconditional] subjective expected value for the random variable ε(τ_ε, ω) is computed under w. For losses ℓ and gains g, let w(θ_ε) = θ_ε + δ_ℓ, δ_ℓ > 0 (4.12), with an analogous weight δ_g > 0 for gains. Upon further reduction, and computing by the same token the unconditional objective expected value of the same random variable, comparison of the expected values in equations (4.14) and (4.15) shows that the quantity δ_g − δ_ℓ − 2θ_ε is dispositive of a subject's perception of the underlying gamble.
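The overweighting of small probabilities and underweighting of large ones can be made concrete with Prelec's (1998) compound-invariant weighting function w(p) = exp(−(−ln p)^γ); the parameter γ = 0.65 below is illustrative, not a value estimated in this paper.

```python
import math

def prelec_w(p: float, gamma: float = 0.65) -> float:
    """Prelec (1998) probability weighting: w(p) = exp(-(-ln p)^gamma)."""
    return math.exp(-((-math.log(p)) ** gamma))

w_small = prelec_w(0.01)                # small probability is overweighted
w_large = prelec_w(0.90)                # large probability is underweighted
fixed_point = prelec_w(math.exp(-1.0))  # w(1/e) = 1/e for every gamma
```

The fixed point at p = 1/e separates the overweighted region (below) from the underweighted region (above), matching the inverse-S shape assumed throughout this section.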

Case i. Submartingale behavior for fair gambles
Assume that δ g − δ ℓ − 2θ ε > 0. Thus, the unconditional subjective expected value is greater than the unconditional objective expected value.
So that for any information set, and for the stopped behavioral process, by virtue of Doob's Optional Sampling Theorem, the subjective expectation exceeds the fair payoff. Let S be state space, and Y be a fair gamble defined on Ω and taking values in outcome space X. Let f ∘ w ∘ P be a composite stochastic choice function defined on S × Ω, and let |ψ(t ∧ τ_ε, ω) − Y(ω)| > ε hold with small probability. That is, the behavior mimicking ε-homotopy is the range of admissible behavior. Subjects are risk averse, and evidently have strong loss aversion.

Case iii. Probability leakage for fair gambles
The interesting case here is when δ_g − δ_ℓ − 2θ_ε = 0. Presumably there is no probability weighting, because now we are in a world of classic von Neumann-Morgenstern utility. Subjects know that the gamble is a martingale. So expectations for the stopped behavioral process coincide. However, the behavioral process was stopped with probability (1/M)(1 − ε²) before the behavior mimicking homotopic sample path was completely deformed into the fair gamble. Additionally, in the space-time density in equation (3.10), max ρ = M. So for a fair gamble we expect psychological distance ρ to be uniformly distributed with probability 1/M over the interval¹¹. Since subjects have "martingale beliefs", they arguably assign equal probability to winning or losing at a given play of the gamble. In that case, the corresponding conditional probability of winning [or losing] is given by (1/2)(1 − ε²). Thus, the subject's chances of winning [or losing] are less than 1/2. In fact, the total probability of winning or losing in this case is 1 − ε² < 1. The probability leakage of ε² induces a subprobability measure on the decision space. To compensate for this probability leakage, subjects may have to renormalize the space-time probability density in equation (3.10) by replacing M with M(1 − ε²). In that case, f(ρ, t) = ρ exp(−ρt)/[M(1 − ε²)]. Perhaps most important, the subprobability feature implies that subjects assign asymmetric weights for martingales. To see this, in the scenario just described above, instead of a fair coin for deciding to gamble, let α be the weight assigned to losing, and β be the weight assigned to winning. So that now the conditional probabilities of losing and winning are, respectively, α(1 − ε²) and β(1 − ε²).
We summarize this result with the following

Proposition 4.2. Let f(ρ, t) be a space-time probability density function for psychological distance ρ. Let Y(ω) be a fair gamble. Assume that subjects have von Neumann-Morgenstern preferences over probability distributions. Let ψ(t ∧ τ_ε, ω) be a stopped behavioral sample path. Assume that subjects know that Y(ω) is a fair gamble, so that for ε > 0 small the total probability of winning or losing is 1 − ε². Then ε² is the probability leakage, and (1 − ε²)^{−1} is the compensating probability weight.
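The leakage arithmetic in Proposition 4.2 can be sketched in a few lines; ε is illustrative. The two conditional probabilities sum to 1 − ε² < 1, and multiplying by the compensating factor (1 − ε²)^{−1} restores a proper probability measure.

```python
# Probability leakage for a subjectively fair gamble; epsilon is illustrative.
eps = 0.2

p_win = 0.5 * (1.0 - eps**2)   # conditional probability of winning
p_lose = 0.5 * (1.0 - eps**2)  # conditional probability of losing
total = p_win + p_lose         # = 1 - eps**2 < 1: leakage of eps**2

compensating_weight = 1.0 / (1.0 - eps**2)   # (1 - eps^2)^(-1)
renormalized_total = total * compensating_weight
```

The subprobability measure (total mass below one) is exactly what forces probability weighting even on agents with martingale beliefs.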

Conclusion
In this paper we augment the Tversky and Kahneman (1992) Cumulative Prospect Theory function space with: 1) a direct mapping from "states of nature", distorted by probability weighting, to state space; and 2) a mapping of lotteries from "states of nature" to outcome space. We show that a commutative map of that augmentation supports an ε-homotopy lifting property whereby composite stochastic choice functions are deformed into outcomes [or gambles]. Due to measurement error or otherwise, ε-homotopy sample paths are behavior mimicking processes which are uniformly stopped by subjects' behavior before the deformation goal is reached. Moreover, we identify conditions under which subjects exhibit submartingale behavior, supermartingale behavior, and probability leakage in response to fair gambles.
Our results show that the commutative prospect space provides a rich topology for further research on construction of abstract behavioral stochastic processes that enhance our understanding of experimental results.

Figure 1: Commutative Map of Prospect Theory's Liftings

for some index i. Additionally, let F_Y be the probability distribution function of Y, so that for rank ordered Y we have the relation π_y = w(F_Y^+(y)) − w(F_Y^−(y)) as the probability weight assigned to the simple lottery at the jump of F. In any event, the commutative diagram plainly shows how probabilities and/or probability weights are embedded in outcome space X. The rest of this paper constitutes analytic proofs of these facts according as they apply to Cumulative Prospect Theory or otherwise.
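The rank-dependent relation π_y = w(F_Y^+(y)) − w(F_Y^−(y)) can be sketched for a discrete lottery. The code below uses hypothetical outcomes and probabilities, and the Prelec form as an example of w (neither is TK92's fitted specification); it computes the decision weight at each jump of F and checks that the weights telescope to w(1) − w(0) = 1.

```python
import math

# Rank-dependent decision weights pi_i = w(F(x_i)) - w(F(x_{i-1})) at the
# jumps of the distribution function F of Y. Lottery and weighting function
# are illustrative choices.
def w(p: float, gamma: float = 0.65) -> float:
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** gamma))

outcomes = [10.0, 50.0, 100.0]  # rank-ordered outcomes
probs = [0.5, 0.3, 0.2]         # objective probabilities

F = []
c = 0.0
for p in probs:
    c += p
    F.append(c)                 # cumulative distribution at each jump

pi = [w(F[0])] + [w(F[i]) - w(F[i - 1]) for i in range(1, len(F))]
```

Each π_i generally differs from the objective p_i, which is precisely the probability distortion the commutative diagram embeds in outcome space X.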

Figure 2: Prospect Theory's Homotopy Lifting of State Space

⁷ (Baucells and Heukamp, 2009, pg. 3) introduced a probability-time dependent model ("PTT") by adding a probability dimension to an outcome space domain. They argue that probability and time are nonseparable, such that an expected value function V(x, p, t) is time dependent through time dependent probability. Further, they characterized the "total psychological distance" a = z + r(x)t, where z = −ln(p), r(x) is a "fade rate", and t is time. Op. cit. pp. 11, 14. Given a psychological distance function d(·), they proposed a density function f(a) = exp(−d(a)). In the context of our parametrization, their density function is

f(ρ, t) = exp(−ρ(z + r(x)t)) (3.8)

where z = −ln(p), r(x) is a "fade rate", and t is time (3.9)