This page provides an interface for simulating the KW (Kiyotaki–Wright) economy, both with an infinite number of agents, using the theoretical framework of Bonetto and Iacopetta, and with a finite number of agents, using direct simulation. The simulation code is compiled with Emscripten, while the interface is written in HTML+JavaScript. TeX rendering is handled by MathJax and the graphs are drawn with Chart.js. Here you can find the C and JavaScript sources for this page.
Since the simulation in the lab will run in discrete time periods, I have adapted the results of Bonetto and Iacopetta to a discrete-time system. For reference, in the infinite case I will also report the results for a continuous-time economy. In the discrete-time case, I assume that sequestration of the good/money happens right after exchange and consumption.
In the following, \(N\) is the number of agents, which is assumed to be a multiple of 6. This is because we have \(N/3\) agents of each type and we need \(N\) to be even to have full matchings.
The quantity of money \(M\) must be an integer less than \(N\); the money rate in the economy is then \(m=M/N\).
The rate of money sequestration \(\delta_s\) is the probability that, in the discrete-time setting, an agent holding money has it taken away and must produce a new good immediately after the exchange phase.
The cost \(c_i\) of carrying good \(i\) can be any positive number.
We also need to set the utility of consumption \(U\), the disutility of production \(D\), and the discount rate \(\rho\).
For a better comparison between continuous and discrete time, we may adjust the parameters of the continuous-time system and set \(\tilde\delta_s=-\log(1-\delta_s)\) as the sequestration rate in the continuous-time system, \(\tilde\rho=-\log(1-\rho)\) as the discount rate, and \(\tilde c_i=c_i/a\) with \(a=(1-\exp(-\tilde\rho))/\tilde\rho\) for the storage costs.
The strategies are defined by the values \(A^i_{j,k}\), where \(A^i_{j,k}=1\) means that an agent of type \(i\) is willing to exchange good \(j\) for good \(k\), with good 0 representing money. As explained in the paper, for each type it is enough to select the three values \(s^i_j\), where \(s^i_1=1\) means that agent \(i\) is willing to exchange good \(i+1\) for money, \(s^i_2=1\) means that agent \(i\) is willing to exchange good \(i+2\) for money, and finally \(s^i_3=1\) means that agent \(i\) is willing to exchange good \(i+1\) for good \(i+2\).
We can also decide that the agents are only partially rational and act randomly with probability \(p\). That is, with probability \(1-p\) they follow their assigned strategy, while with probability \(p\) they flip a fair coin. This holds for every meeting except those involving the agent's consumption good, and we can decide whether the agents are rational when dealing with money.
We can now look for the Nash equilibria of an infinite system with the above parameters, or check whether a particular set of strategies is Nash. Click the button below to select a particular set of strategies and check whether it is Nash; otherwise the algorithm will look for Nash strategies. If none is found, the last attempt will be displayed and the table will be greyed out. If more than one is present, the algorithm stops at the first it finds. Since the attempts start from a random initial point, run the search several times to check for multiple equilibria.
We are now ready to run our simulations.
Type \(i\)  1: \(i+1\) ↔ money  2: \(i+2\) ↔ money  3: \(i+1\) ↔ \(i+2\)
Agent 1  1  1  0
Agent 2  1  1  1
Agent 3  1  1  0
[Output panels, for both the continuous-time and the discrete-time system: steady state, value functions, welfare]
We can now check if the equilibrium obtained is Nash:
[Table: Nash check for Agents 1–3, continuous and discrete time]
We first run a long simulation of the discrete system. We call \(N_{i,j}\) the number of agents of type \(i\) holding good \(j\), and we estimate the average value of \(N_{i,j}\), of \(p_{i,j}=N_{i,j}/N\), and of the value functions \(V_{i,j}\) at the steady state, assuming that the agents always use the selected strategies. Notice that even if such strategies are a Nash equilibrium for the infinite system, they may fail to be optimal for every distribution of goods in the finite system.
To check for Nash equilibria we can also allow one agent to play a strategy different from the others for a finite time.
Type \(i\)  1: \(i+1\) ↔ money  2: \(i+2\) ↔ money  3: \(i+1\) ↔ \(i+2\)  Good  Time 

\(p_{i,j}\)  \(N_{i,j}\)  Value Function  

Type 1  Type 2  Type 3  Type 1  Type 2  Type 3  Type 1  Type 2  Type 3  
Good 1  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000 
Good 2  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000 
Good 3  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000 
Money  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000 
Welfare  0.0000  0.0000  0.0000 
We can now look at the finite system and compute the value function from a given initial state. The suggested values are the best integer approximation of the steady-state average of \(N_{i,j}\). This is probably the best choice for the initial endowment in the lab. They can be modified to check the behaviour of the system when it starts from a different initial state. To compute the value functions, we average over a large number of trajectories starting from the given initial state. The trajectories are evolved long enough that any further value is discounted to irrelevance. To obtain faster or more precise results, we can modify the number of trajectories we average over.
We also compute the expected number of points that an agent of type \(i\), initially holding good \(j\), will have if the game is terminated at the end of a period with probability \(\rho\). The table reports the mean value ± the standard deviation.
Initial State  Value Function  

Type 1  Type 2  Type 3  Totals  Type 1  Type 2  Type 3  
Good 1  0  0.0000  0.0000  0.0000  
Good 2  0  0.0000  0.0000  0.0000  
Good 3  0  0.0000  0.0000  0.0000  
Money  0.0000  0.0000  0.0000  
Totals  0.0000  0.0000  0.0000 
To compare with the lab results, we compute the average number of points gained by the agents in one period of the market. In the short-run case we average along a round of random length, as in the laboratory experiments. In the long-run case we average along a long trajectory to obtain steady-state values.
Average Points per play (short run)  

Type 1  Type 2  Type 3  Averages  
Good 1  0.0000  0.0000  0.0000  0.0000 
Good 2  0.0000  0.0000  0.0000  0.0000 
Good 3  0.0000  0.0000  0.0000  0.0000 
Money  0.0000  0.0000  0.0000  0.0000 
Averages  0.0000  0.0000  0.0000  0.0000 
Average Points per play (long run)  
0.0000  0.0000  0.0000  0.0000 
Here we show the evolution of the mean values of \(p_{1,j}\) and their variance as a function of time.
Then we show the evolution of the mean values of \(p_{2,j}\) and their variance as a function of time.
And we show the evolution of the mean values of \(p_{3,j}\) and their variance as a function of time.
Finally, we look at the total variation distance between the distribution of the \(p_{i,j}\) at time \(t\) and the final distribution.