Identification in Dynamic Discrete Models
UBC ECON 567 Assignment
Please write your answers in a literate programming format, such as a Pluto, Jupyter, or Quarto notebook. Turn in both the notebook file and an html or pdf.
Simulation and Estimation
Adapt the equilibrium calculation, simulation, and estimation code in dynamicgame.jl to compute an equilibrium, simulate, and estimate with a single-agent firm (N=1). Most of the code will just work; the exception is the transition function, but it is not needed here. Check your code by simulating some data with N=1 and Nexternal=2, and then estimating the model with the simulated data. When simulating, set T to 20_000 so that it is easier to distinguish estimation noise from some other problem. Make a table similar to the one at the end of the “Estimation” section of dynamicgame.jl comparing the true payoffs and the estimated payoffs.
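As a rough sketch of the setup (u and Ex are the payoff and exogenous-transition objects from dynamicgame.jl, and the commented-out simulation and estimation calls are placeholders for the corresponding routines there):

N = 1                     # single agent
Nexternal = 2             # two binary exogenous state variables
ns = 2^(N + Nexternal)    # number of states when all state variables are binary
T = 20_000                # long panel, so estimation noise is small
g = DG.DynamicGame(N, u, 0.9, Ex, 1:2, 1:ns)
res, choicep = DG.equilibrium(g)
# data = simulate(g, choicep, T)   # placeholder name; use the simulation code in dynamicgame.jl
# Eu = estimatepayoffs(data)       # placeholder name; use the estimation code in dynamicgame.jl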
Fitted and True Choice Probabilities
The function DG.equilibrium returns a tuple consisting of the output from NLsolve and the equilibrium choice probabilities. The equilibrium choice probabilities are in a 3-dimensional array of size number of players by number of actions by number of states: choicep[i,a,s] is the probability that player i chooses action a in state s.
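For example, with g the single-agent game from problem 1, the true equilibrium choice probabilities can be obtained and inspected like this (a sketch; adapt the names to your notebook):

res, choicep = DG.equilibrium(g)
choicep[1, 2, :]   # probability the single agent (i = 1) chooses action 2 in each state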
Compare the true choice probabilities with the estimated choice probabilities from the model. To calculate the estimated choice probabilities, create a new DG.DynamicGame with the payoff function given by the estimated payoffs from problem 1. You may use the code below to get started. Create a table and/or figure that compares the estimated and true choice probabilities.
function createufunction(Eu::AbstractArray)
    # Eu is players × actions × states
    N = size(Eu,1)
    Nstates = size(Eu,3)
    Nchoices = size(Eu,2)
    # number of exogenous binary state variables, given Nstates = 2^(N + Nexternal)
    Nexternal = Int(log2(Nstates)) - N
    # enumerate all states as bit vectors of length N + Nexternal
    states = BitVector.(digits.(0:(2^(N+Nexternal)-1), base=2, pad=N+Nexternal))
    @show states
    # map each state vector to its integer index
    statedict = Dict(statevec(x)=>x for x in 1:length(states))
    # payoff lookup by integer state index or by state vector
    u(i,a,x::Integer) = Eu[i,a[1],x]
    u(i,a,s::AbstractVector) = u(i,a,statedict[s])
    return(u)
end

û = createufunction(Eu) # assuming you used Eu as the estimated payoffs in Problem 1
ĝ = DG.DynamicGame(N, û, 0.9, Ex, 1:2, 1:ns)
res, choicep̂ = DG.equilibrium(ĝ)
# create table and/or figure comparing choicep̂ and choicep
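One way to lay out the comparison is a long table with one row per state and action, for example with DataFrames.jl (a sketch; any tabulation or plotting package will do, and the variable names are just suggestions):

using DataFrames
Nact = size(choicep, 2)    # number of actions
Nst  = size(choicep, 3)    # number of states
comparison = DataFrame(state = repeat(1:Nst, inner = Nact),
                       action = repeat(1:Nact, outer = Nst),
                       true_prob = vec(choicep[1, :, :]),
                       estimated_prob = vec(choicep̂[1, :, :]))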
Counterfactual Choice Probabilities
Suppose the payoff of action 2 in states 2, 4, 6, and 8 is decreased by 0.25. Compute the true and estimated changes in choice probabilities, and compare them in a figure or table.
You can create an appropriate shifted payoff function and new choice probabilities with the following code.
# decrease the payoff of action 2 by 0.25 in the even-numbered states (2, 4, 6, and 8)
u2(i,a,s) = u(i,a,s) + (a[1]==2)*(s % 2 == 0)*(-0.25)
g2 = DG.DynamicGame(N, u2, 0.9, Ex, 1:2, 1:ns)
res2, choicep2 = DG.equilibrium(g2)
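To get the estimated change, apply the same shift to the estimated payoff function û from the previous problem; a sketch, where û2, ĝ2, and choicep̂2 are just suggested names:

û2(i,a,s) = û(i,a,s) + (a[1]==2)*(s % 2 == 0)*(-0.25)
ĝ2 = DG.DynamicGame(N, û2, 0.9, Ex, 1:2, 1:ns)
_, choicep̂2 = DG.equilibrium(ĝ2)
# changes in choice probabilities, true and estimated
Δtrue = choicep2 .- choicep
Δest  = choicep̂2 .- choicep̂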
Incorrect Payoff Normalization
The estimation code assumes the payoff of action 1 is 0 in all states. What if this assumption is incorrect? To explore what happens, simulate data where the payoff of action 1 is -(s-3.5)/5*(s % 2==1) in state s, and the payoff of action 2 is the same as in problems 1-3. Then estimate the model assuming the payoff of action 1 is 0. Finally, calculate the change in conditional choice probabilities from decreasing the payoff of action 2 in states 2, 4, 6, and 8 by 0.25, as in problem 3. Does an incorrect normalization affect the estimated change in choice probabilities?
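A minimal sketch of the data-generating payoffs for this problem, assuming the baseline u from problems 1-3 assigns payoff 0 to action 1 so that the added term changes only action 1's payoff (u4 and g4 are just suggested names):

# true payoffs with a nonzero payoff for action 1; u is the payoff function from problems 1-3
u4(i,a,s) = u(i,a,s) + (a[1]==1)*(-(s-3.5)/5)*(s % 2==1)
g4 = DG.DynamicGame(N, u4, 0.9, Ex, 1:2, 1:ns)
# simulate from g4, then estimate as before, still normalizing action 1's payoff to 0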
Shift in Transitions
Repeat the analysis in problem 4, but instead of a shift in payoffs, suppose the transition probability of the exogenous state changes. Consider changing Ex from pstay=0.7 to pstay=0.9. Comment on your findings.
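The exact construction of Ex is in dynamicgame.jl. As a purely illustrative sketch, if each binary exogenous variable keeps its current value with probability pstay and switches otherwise, the counterfactual per-variable transition matrix would be:

# illustrative only: transition matrix for one binary exogenous variable
pstay = 0.9                      # counterfactual persistence (baseline uses 0.7)
Exnew = [pstay (1-pstay); (1-pstay) pstay]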
Implications
Read Kalouptsidi, Scott, and Souza-Rodrigues (2021). What findings of theirs do the above simulations illustrate?
For further reading, consider looking at Kalouptsidi, Scott, and Souza-Rodrigues (2017) and Kalouptsidi et al. (2024).