Welfare distance is straightforward to compute given the analytical resource expansion path. For each planner preference, the solution does not generate an optimal allocation for one particular level of aggregate resources; instead it generates a queue of allocations that uniquely defines the allocation along the entire resource expansion path. Because of this, the value given optimal choices at each incremental point of the resource expansion path can be computed trivially.

The value along the resource expansion path, which depends on preference, is stored in the df_queue_il_long_with_V dataframe's variable svr_V_star_Q_il. Note that in a general problem the resource expansion path goes up to infinity, but given the upper bounds on individual allocations, the resource expansion path here is finite: its final point equals the sum of maximum allocations across individuals. In the resource expansion path dataframe df_queue_il_long_with_V, the additional variables needed are: svr_rho and svr_rho_val for the \(\rho\) key and value; and svr_inpalc, which is the queue ranking number and is also equivalent to the current aggregate resource level. If there are 2 individuals with at most 11 units of allocation in total, and the problem was solved at three different planner preference levels, this dataframe would have \(11 \cdot 3\) rows.
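A minimal sketch of the shape of df_queue_il_long_with_V, using hypothetical preference values and a placeholder value column (the actual V_star_Q_il values would come from the package's queue-solving step, which is not shown here):

```r
# Hedged sketch: queue dataframe shape for 2 individuals with at most
# 11 total units of allocation, solved at 3 planner preference levels.
ar_rho <- c(0.99, -1, -100)            # three illustrative rho values (assumed)
it_Q_max <- 11                         # sum of maximum allocations across individuals
df_queue_il_long_with_V <- expand.grid(
  rho_val = ar_rho,                    # svr_rho_val: the rho value
  Q_il    = seq(1, it_Q_max)           # svr_inpalc: queue rank = aggregate resources
)
df_queue_il_long_with_V$rho <- match(df_queue_il_long_with_V$rho_val, ar_rho)  # svr_rho key
df_queue_il_long_with_V$V_star_Q_il <- NA  # filled by the queue-solving routine
nrow(df_queue_il_long_with_V)          # 11 * 3 = 33 rows
```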

On the other hand, from dataframe df_input_ib we need information on alternative allocations. If there are two individuals, this dataframe would have only two rows. Three variables are needed: A_i_l0 for the needs at allocation equal to zero; alpha_o_i for the effectiveness measured given the cumulative observed allocation for each individual \(i\); and beta_i, which is also needed for the value calculation. Note that these are the three individual-specific ingredients.
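A hedged sketch of df_input_ib for two individuals, with hypothetical numbers; only the three individual-specific variables named above are required:

```r
# Illustrative df_input_ib: one row per individual, no ID or rho columns needed.
df_input_ib <- data.frame(
  A_i_l0    = c(1.2, 0.8),    # A_{0,i}: needs at zero allocation (assumed values)
  alpha_o_i = c(0.10, 0.15),  # alpha_{o,i}: effectiveness at observed cumulative allocation
  beta_i    = c(0.5, 0.5)     # beta_i: weight entering the value calculation
)
nrow(df_input_ib)             # 2 rows for 2 individuals
```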

ffp_opt_anlyz_sodis_rev(
  ar_rho,
  it_w_agg,
  df_input_ib,
  df_queue_il_long_with_V,
  svr_rho = "rho",
  svr_rho_val = "rho_val",
  svr_A_i_l0 = "A_i_l0",
  svr_alpha_o_i = "alpha_o_i",
  svr_inpalc = "Q_il",
  svr_beta_i = "beta_i",
  svr_measure_i = NA,
  svr_mass_cumu_il = "mass_cumu_il",
  svr_V_star_Q_il = "V_star_Q_il"
)

Arguments

ar_rho

array of planner preferences for equality; each value ranges from negative infinity to 1.

it_w_agg

integer data/observed aggregate resources, \(\hat{W}^{o}\).

df_input_ib

dataframe of \(A_{0,i}\) and \(\alpha_{o,i}\), constructed from individual \(A\) without allocation and the cumulative aggregate effects of allocation given what is observed. The dataframe needs three variables: \(A_{0,i}\), \(\alpha_{o,i}\), and \(\beta_{i}\). Note that an ID variable is not needed, because no merging is needed. Also note that \(\rho\) values are not needed, because they are supplied by df_queue_il_long_with_V.

df_queue_il_long_with_V

dataframe with optimal allocation resource expansion results, including the value along the resource expansion path, so that the observed value can be compared against it.

svr_A_i_l0

string variable name in the df_input_ib dataframe for \(A_{0,i}\).

svr_alpha_o_i

string variable name in the df_input_ib dataframe for \(\alpha_{o,i}\).

Author

Fan Wang, http://fanwangecon.github.io

Examples

data(df_opt_caschool_input_ib)
df_input_ib <- df_opt_caschool_input_ib
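A hedged continuation of the example: assuming a queue dataframe df_queue_il_long_with_V with the value along the expansion path has already been computed by a prior queue-solving step (not shown), the call follows the usage above; ar_rho and it_w_agg values here are illustrative:

```r
# Assumes df_queue_il_long_with_V exists from an earlier resource-expansion
# solution step; preference array and aggregate resources are assumed values.
ar_rho <- c(0.99, -1, -100)
it_w_agg <- 100
ls_rev <- ffp_opt_anlyz_sodis_rev(
  ar_rho, it_w_agg,
  df_input_ib, df_queue_il_long_with_V,
  svr_rho = "rho", svr_rho_val = "rho_val",
  svr_A_i_l0 = "A_i_l0", svr_alpha_o_i = "alpha_o_i",
  svr_inpalc = "Q_il", svr_beta_i = "beta_i",
  svr_measure_i = NA, svr_mass_cumu_il = "mass_cumu_il",
  svr_V_star_Q_il = "V_star_Q_il"
)
```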