Simulations for the KW model with a finite number of agents and inflation.

This page provides an interface for simulations of the KW economy, both with an infinite number of agents, using the theoretical framework in the work by Bonetto and Iacopetta, and with a finite number of agents, using direct simulation. The simulation code is written in C and compiled with Emscripten, while the interface is coded in HTML and JavaScript. TeX rendering is based on MathJax and the graphs are displayed by Chart.js. Here you can find the C and JavaScript sources for this page.

Since the simulation in the lab will run in discrete time periods, I have adapted the results in Bonetto and Iacopetta to a discrete-time system. For reference, in the infinite case I will also report the results for a continuous-time economy. In the discrete-time case, I assume that sequestration of the good/money happens right after exchange and consumption.

Simulation Setup

In the following, \(N\) is the number of agents, which is assumed to be a multiple of 6. This is so because we have \(N/3\) agents of each type and we need \(N\) to be even to have full matchings.

Number of agents \(N\)=

The quantity of money \(M\) must be an integer less than \(N\); the money rate in the economy will then be \(m=M/N\).

Money \(M\)=

The rate of money sequestration \(\delta_s\) is the probability that an agent holding money will have that holding taken away and will have to produce a new good just after the exchange phase in the discrete-time setting.

Rate of money sequestration \(\delta_s\)=

The cost \(c_i\) of carrying good \(i\) can be any positive number.

Cost of Carrying a good: Good 1 \(c_1\)= Good 2 \(c_2\)= Good 3 \(c_3\)=

We also need to set the utility of consumption \(U\), the disutility of production \(D\), and the discount rate \(\rho\).

Net Utility of Consumption: \(U-D=\), Disutility of Production: \(D\)=, Discount rate: \(\rho\)=.

For a better comparison between continuous and discrete time, we may adjust the parameters of the continuous-time system, setting \(\tilde\delta_s=-\log(1-\delta_s)\) as the sequestration rate, \(\tilde\rho=-\log(1-\rho)\) as the discount rate, and \(\tilde c_i=c_i/a\), with \(a=(1-\exp(-\tilde\rho))/\tilde\rho\), for the storage costs.

Rates:
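For concreteness, the conversion can be computed as below. This is a minimal standalone C sketch that implements the formulas above; the numerical values are arbitrary examples, not defaults of the page.

#include <math.h>
#include <stdio.h>

/* Convert the discrete-time parameters to their continuous-time
   counterparts as described in the text.  Example values only. */
int main(void) {
    double delta_s = 0.10, rho = 0.05;
    double c[3] = {0.01, 0.04, 0.09};

    double delta_tilde = -log(1.0 - delta_s);   /* sequestration rate */
    double rho_tilde   = -log(1.0 - rho);       /* discount rate      */
    double a = (1.0 - exp(-rho_tilde)) / rho_tilde;

    printf("delta_s~ = %f, rho~ = %f\n", delta_tilde, rho_tilde);
    for (int i = 0; i < 3; i++)
        printf("c_%d~ = %f\n", i + 1, c[i] / a); /* storage costs */
    return 0;
}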

The strategies are defined by the values \(A^i_{j,k}\), where \(A^i_{j,k}=1\) means that an agent of type \(i\) is willing to exchange good \(j\) for good \(k\), with good 0 representing money. As explained in the paper, for each type it is enough to select the three values \(s^i_j\): \(s^i_1=1\) means that agent \(i\) is willing to exchange good \(i+1\) for money, \(s^i_2=1\) means that agent \(i\) is willing to exchange good \(i+2\) for money, and finally \(s^i_3=1\) means that agent \(i\) is willing to exchange good \(i+1\) for good \(i+2\).
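As an illustration, here is one possible C encoding of the reduced strategies. The struct layout and the function accepts() are hypothetical, not taken from the page's sources; goods are numbered 1, 2, 3 with 0 for money, and good indices are taken mod 3.

/* Hypothetical encoding of the reduced strategies s^i_j. */
typedef struct { int s1, s2, s3; } Strategy;

/* Is an agent of type i (1..3) willing to give good j for good k?
   Good 0 is money; type i consumes good i and produces good i+1. */
int accepts(int i, int j, int k, Strategy s) {
    int g1 = i % 3 + 1;         /* good i+1 (mod 3) */
    int g2 = (i + 1) % 3 + 1;   /* good i+2 (mod 3) */
    if (k == i)  return 1;      /* always accept the consumption good */
    if (j == g1 && k == 0)  return s.s1;   /* i+1 for money */
    if (j == g2 && k == 0)  return s.s2;   /* i+2 for money */
    if (j == g1 && k == g2) return s.s3;   /* i+1 for i+2   */
    return 0;
}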

We can also decide that the agents are only partially rational and act randomly with probability \(p\). That is, with probability \(1-p\) they follow their assigned strategy, while with probability \(p\) they flip a fair coin. This holds for every meeting except those involving the agent's consumption good, and we can decide whether the agents are rational when dealing with money.

Probability of random behavior: \(p\)=.
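A minimal sketch of the noise mechanism, using the standard C rand() (the page's actual generator may differ): the strategy's answer is kept with probability \(1-p\) and replaced by a fair coin with probability \(p\).

#include <stdlib.h>

/* 'strategic' is the 0/1 answer of the assigned strategy.  Meetings
   involving the agent's consumption good (and, if so chosen, money)
   bypass this function and stay rational. */
int noisy_decision(int strategic, double p) {
    if ((double)rand() / RAND_MAX < p)
        return rand() & 1;      /* random behavior: fair coin */
    return strategic;           /* follow the assigned strategy */
}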

Infinite System

We can now look for the Nash equilibria of an infinite system with the above parameters, or check whether a particular set of strategies is Nash. Click the button below to select a particular set of strategies and check whether it is Nash; otherwise the algorithm will look for Nash strategies. If none is found, the last attempt is displayed and the table is greyed out. If more than one is present, the algorithm stops at the first it finds; since each attempt starts from a random initial point, run it several times to check for multiple equilibria.
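One way to organize such a search is iterated best responses from a random starting profile, as sketched below. Here random_strategies() and best_response() are assumed helpers, and this sketch is not necessarily the algorithm used by the page.

#include <string.h>

typedef struct { int s[3][3]; } Profile;        /* s^i_j for i,j = 1..3 */

Profile random_strategies(void);                /* assumed: random start */
void    best_response(Profile *p, int type);    /* assumed: optimal reply
                                                   of one type given p   */

/* Returns 1 and stores a Nash profile in *out if a fixed point of the
   best-response map is reached; otherwise returns 0 with the last
   attempt (the page greys the table in that case). */
int find_nash(Profile *out, int max_iter) {
    Profile cur = random_strategies();
    for (int it = 0; it < max_iter; it++) {
        int stable = 1;
        for (int i = 0; i < 3; i++) {
            Profile next = cur;
            best_response(&next, i);
            if (memcmp(&next, &cur, sizeof cur)) { cur = next; stable = 0; }
        }
        if (stable) { *out = cur; return 1; }   /* no profitable deviation */
    }
    *out = cur;
    return 0;
}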

We are now ready to run our simulations.


          1: \(i+1\) ↔ money    2: \(i+2\) ↔ money    3: \(i+1\) ↔ \(i+2\)
Agent 1            1                     1                      0
Agent 2            1                     1                      1
Agent 3            1                     1                      0

Steady state

Continuous time:
          Type 1    Type 2    Type 3
Good 1   0.00000   0.00000   0.00000
Good 2   0.00000   0.00000   0.00000
Good 3   0.00000   0.00000   0.00000
Money    0.00000   0.00000   0.00000

Discrete time:
          Type 1    Type 2    Type 3
Good 1   0.00000   0.00000   0.00000
Good 2   0.00000   0.00000   0.00000
Good 3   0.00000   0.00000   0.00000
Money    0.00000   0.00000   0.00000

Value Functions

Continuous time:
          Type 1    Type 2    Type 3
Good 1   0.00000   0.00000   0.00000
Good 2   0.00000   0.00000   0.00000
Good 3   0.00000   0.00000   0.00000
Money    0.00000   0.00000   0.00000

Discrete time:
          Type 1    Type 2    Type 3
Good 1   0.00000   0.00000   0.00000
Good 2   0.00000   0.00000   0.00000
Good 3   0.00000   0.00000   0.00000
Money    0.00000   0.00000   0.00000

Welfare

Continuous time:
          Type 1    Type 2    Type 3
Welfare  0.00000   0.00000   0.00000
Average  0.00000

Discrete time:
          Type 1    Type 2    Type 3
Welfare  0.00000   0.00000   0.00000
Average  0.00000

Finite System

We first run a long simulation of the discrete system. We call \(N_{i,j}\) the number of agents of type \(i\) holding good \(j\), and we give an estimate of the average value of \(N_{i,j}\), of \(p_{i,j}=N_{i,j}/N\), and of the value functions \(V_{i,j}\) in the steady state, assuming that the agents always use the selected strategies. Notice that even if such strategies are a Nash equilibrium for the infinite system, they may fail to be optimal for every distribution of goods in the finite system.
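A minimal sketch of one period of the direct simulation is below, assuming a trade_ok() helper that applies the strategies of the previous section; the order of operations (matching and exchange, then consumption and production, then sequestration) follows the text.

#include <stdlib.h>

#define N 24                               /* number of agents, multiple of 6 */

typedef struct { int type; int good; } Agent;    /* good 0 = money */

int trade_ok(const Agent *a, const Agent *b);    /* assumed strategy check */

void one_period(Agent ag[N], double delta_s) {
    /* Random full matching: Fisher-Yates shuffle, then pair neighbors. */
    int idx[N];
    for (int i = 0; i < N; i++) idx[i] = i;
    for (int i = N - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (int k = 0; k < N; k += 2) {
        Agent *a = &ag[idx[k]], *b = &ag[idx[k + 1]];
        if (trade_ok(a, b) && trade_ok(b, a)) {  /* both sides accept */
            int t = a->good; a->good = b->good; b->good = t;
        }
        /* Consumption and production: type i eats good i, produces i+1. */
        if (a->good == a->type) a->good = a->type % 3 + 1;
        if (b->good == b->type) b->good = b->type % 3 + 1;
    }
    /* Sequestration, right after exchange and consumption. */
    for (int i = 0; i < N; i++)
        if (ag[i].good == 0 && (double)rand() / RAND_MAX < delta_s)
            ag[i].good = ag[i].type % 3 + 1;
}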

To check for Nash equilibria we can also allow one agent to play a strategy different from the others for a finite time.

Press the button if you want to select a representative agent.




              \(p_{i,j}\)                  \(N_{i,j}\)                Value Function
          Type 1  Type 2  Type 3    Type 1  Type 2  Type 3    Type 1  Type 2  Type 3
Good 1    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000
Good 2    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000
Good 3    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000
Money     0.0000  0.0000  0.0000    0.0000  0.0000  0.0000    0.0000  0.0000  0.0000
Welfare                                                       0.0000  0.0000  0.0000

We can now look at the finite system and compute the value functions from a given initial state. The suggested values are the best integer approximation of the steady-state \(N_{i,j}\) averages; this is probably the best choice for the initial endowment in the lab. They can be modified to check the behaviour of the system when it starts from a different initial state. To compute the value functions we average over a large number of trajectories starting from the given initial state. The trajectories are evolved long enough that any further contribution is discounted to irrelevance. To obtain faster or more precise results, we can modify the number of trajectories we average over.

Number of trajectories \(N_{av}\)=
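The averaging can be sketched as follows; period_payoff() is a hypothetical stand-in for the net utility flow produced along a simulated trajectory, and the horizon is chosen so that \((1-\rho)^{\text{horizon}}\) is negligible.

/* Monte Carlo estimate of a value function: average the discounted
   utility stream over N_av independent trajectories. */
double period_payoff(int traj, int t);     /* assumed helper */

double value_estimate(int N_av, int horizon, double rho) {
    double sum = 0.0;
    for (int n = 0; n < N_av; n++) {
        double v = 0.0, disc = 1.0;
        for (int t = 0; t < horizon; t++) {
            v += disc * period_payoff(n, t);
            disc *= 1.0 - rho;             /* per-period discount factor */
        }
        sum += v;
    }
    return sum / N_av;
}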

We also compute the expected number of points that an agent of type \(i\) initially holding good \(j\) will have if the game is terminated at the end of a period with probability \(\rho\). The table reports the mean value ± the standard deviation.


              Initial State                       Value Function
          Type 1  Type 2  Type 3  Totals    Type 1  Type 2  Type 3
Good 1                                 0    0.0000  0.0000  0.0000
Good 2                                 0    0.0000  0.0000  0.0000
Good 3                                 0    0.0000  0.0000  0.0000
Money                                       0.0000  0.0000  0.0000
Totals    0.0000  0.0000  0.0000
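The random termination can be simulated directly: each round lasts a geometric number of periods with parameter \(\rho\). A minimal sketch, with points_in_period() a hypothetical stand-in for one simulated market period:

#include <math.h>
#include <stdlib.h>

double points_in_period(void);             /* assumed helper */

/* Mean and standard deviation of the points accumulated in a round
   that ends after each period with probability rho. */
void points_mean_std(int reps, double rho, double *mean, double *std) {
    double sum = 0.0, sumsq = 0.0;
    for (int r = 0; r < reps; r++) {
        double pts = 0.0;
        do {
            pts += points_in_period();
        } while ((double)rand() / RAND_MAX >= rho);  /* survive w.p. 1-rho */
        sum += pts; sumsq += pts * pts;
    }
    *mean = sum / reps;
    *std  = sqrt(sumsq / reps - (*mean) * (*mean));  /* population std dev */
}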

To compare with lab results, we compute the average number of points gained by the agents in one period of the market. In the short-run case we average along a round of random length, as in the laboratory experiments. In the long-run case we average along a long trajectory to obtain steady-state values.


Average Points per play (short run)
          Type 1  Type 2  Type 3  Averages
Good 1    0.0000  0.0000  0.0000    0.0000
Good 2    0.0000  0.0000  0.0000    0.0000
Good 3    0.0000  0.0000  0.0000    0.0000
Money     0.0000  0.0000  0.0000    0.0000
Averages  0.0000  0.0000  0.0000    0.0000

Average Points per play (long run)
          Type 1  Type 2  Type 3  Averages
          0.0000  0.0000  0.0000    0.0000


We expect the distribution of goods among agents after a long time to be described by an invariant measure on the space of distributions. To get an idea of the rate of convergence, we compute the evolution of the mean value and variance of the \(p_{i,j}\).
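A convenient way to track these statistics along a single long run is Welford's online algorithm, sketched here for one \(p_{i,j}\):

/* Running mean and variance via Welford's online updates. */
typedef struct { long n; double mean, M2; } RunStat;

void runstat_update(RunStat *s, double x) {
    double d = x - s->mean;
    s->n++;
    s->mean += d / s->n;
    s->M2   += d * (x - s->mean);
}

double runstat_variance(const RunStat *s) {
    return s->n > 1 ? s->M2 / (s->n - 1) : 0.0;   /* sample variance */
}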

Press the button to see the graphs of the evolution of the \(p_{i,j}\) and their variances.




Stock distribution infinite system

\begin{tabular}[c]{ c | c c | c c c | c c c | c c c }
\hline\hline
Case & $M$ & $\delta_m$ & $p_{1,2}$ & $p_{1,3}$ & $p_{1,m}$ & $p_{2,3}$ & $p_{2,1}$ & $p_{2,m}$ & $p_{3,1}$ & $p_{3,2}$ & $p_{3,m}$\\
\hline
\hline
\end{tabular}

Value functions infinite system

\begin{tabular}[c]{ c | c c | c c c | c c c | c c c | c}
\hline\hline
Case & $M$ & $\delta_m$ & $V_{1,2}$ & $V_{1,3}$ & $V_{1,m}$ & $V_{2,3}$ & $V_{2,1}$ & $V_{2,m}$ & $V_{3,1}$ & $V_{3,2}$ & $V_{3,m}$ & Strategies\\
\hline
\hline
\end{tabular}

Welfare infinite system

\begin{tabular}[c]{ c | c c | c c c | c }
\hline\hline
Case & $M$ & $\delta_m$ & $W_1$ & $W_2$ & $W_3$ & $W$\\
\hline
\hline
\end{tabular}

Stock distribution finite system

\begin{tabular}[c]{ c | c c | c c c | c c c | c c c }
\hline\hline
Case & $M$ & $\delta_m$ & $p_{1,2}$ & $p_{1,3}$ & $p_{1,m}$ & $p_{2,3}$ & $p_{2,1}$ & $p_{2,m}$ & $p_{3,1}$ & $p_{3,2}$ & $p_{3,m}$\\
\hline
\hline
\end{tabular}

Value functions finite system

\begin{tabular}[c]{ c | c c | c c c | c c c | c c c }
\hline\hline
Case & $M$ & $\delta_m$ & $V_{1,2}$ & $V_{1,3}$ & $V_{1,m}$ & $V_{2,3}$ & $V_{2,1}$ & $V_{2,m}$ & $V_{3,1}$ & $V_{3,2}$ & $V_{3,m}$\\
\hline
\hline
\end{tabular}

Welfare finite system

\begin{tabular}[c]{ c | c c | c c c | c }
\hline\hline
Case & $M$ & $\delta_m$ & $W_1$ & $W_2$ & $W_3$ & $W$\\
\hline
\hline
\end{tabular}

Value functions finite system given initial point

\begin{tabular}[c]{ c | c c | c c c | c c c | c c c }
\hline\hline
Case & $M$ & $\delta_m$ & $V_{1,2}$ & $V_{1,3}$ & $V_{1,m}$ & $V_{2,3}$ & $V_{2,1}$ & $V_{2,m}$ & $V_{3,1}$ & $V_{3,2}$ & $V_{3,m}$\\
\hline
\hline
\end{tabular}

Welfare finite system given initial point

\begin{tabular}[c]{ c | c c | c c c | c }
\hline\hline
Case & $M$ & $\delta_m$ & $W_1$ & $W_2$ & $W_3$ & $W$\\
\hline
\hline
\end{tabular}

Points finite system given initial point

\begin{tabular}[c]{ c | c c | c c c | c }
\hline\hline
Case & $M$ & $\delta_m$ & $\omega_1$ & $\omega_2$ & $\omega_3$ & $\omega$\\
\hline
\hline
\end{tabular}




Write To File