In this paper, we exploit the computational gains that derive from the robustness of neural networks to multicollinearity to extend the optimal debt management problem studied by Faraglia et al. (2019) to four maturities. The hedging benefits provided by the additional maturities allow the government to respond to expenditure shocks by raising financial income without increasing the total outstanding debt. Through this mechanism, the government effectively subsidizes the private sector in recessions.
We use a neural network (NN) in a supervised machine learning fashion to approximate the expectation terms typically contained in the optimality conditions of an economic model, in the spirit of the Parameterized Expectations Algorithm (PEA) with stochastic simulation, introduced by den Haan and Marcet (1990) and in a similar fashion to Duffy and McNelis (2001). On the one hand, stochastic simulation methods allow us to tackle problems with a high number of state variables, since they calculate solutions only in the states that are visited in equilibrium (i.e., the ergodic set). On the other hand, when the set of state variables is generated by a stochastic simulation, it is likely to suffer from multicollinearity. In this context, this paper makes two contributions. First, we show that an NN-based expectations algorithm can deal efficiently with multicollinearity by extending the optimal debt management problem studied by Faraglia et al. (2019) to four maturities. Second, we show that the optimal debt management policy prescribes an active role for the medium-term maturities, enabling the planner to raise financial income without increasing its total borrowing in response to expenditure shocks. We consider this problem a particularly interesting economic application that also poses significant computational challenges for four reasons.
First, the number of state variables increases with the number and length of the available maturities. Second, this class of problems includes forward-looking constraints, and the problem can be made recursive only at the cost of adding even more state variables. Following Marcet and Marimon (2019), we formulate the recursive Lagrangian to solve for the time-inconsistent optimal policy under full commitment with multiple maturities. When markets are incomplete, the Ramsey planner needs to keep track of all promises made in previous periods. For these reasons, optimal maturity management problems suffer from the curse of dimensionality (see Bellman (1961)). For example, the optimal debt management problem with four maturities considered in Section 4 features 46 state variables. Third, because of the maturities, many of these state variables are multicollinear when the model is solved using a stochastic simulation approach.1 Fourth, this class of problems does not have a stochastic steady state, as documented in Aiyagari, Marcet, Sargent, and Seppälä (2002), and tends to frequently hit the borrowing and lending constraints. Such properties render the model particularly hard to solve using perturbation methods around a particular point.2
In Section 4, we use the methodology to study the optimal government debt management policy when the Ramsey planner can issue an increasing number of debt instruments with different maturities. Intuitively, the prices of longer maturities are typically more responsive to shocks than the prices of shorter maturities. This differential response creates opportunities for hedging by borrowing in long-term bonds and saving in short-term bonds: the value of liabilities then falls by more than the value of assets in response to negative shocks (see Angeletos (2002), Buera and Nicolini (2004), and Faraglia et al. (2019)). Additionally, the fact that short bond prices are not as responsive to shocks allows the planner to smooth the price of new debt issuance by rebalancing the portfolio toward the longer maturities in economic booms and toward the shorter maturities in recessions. We find that the planner actively uses the additional medium-term maturities to exploit both the hedging and the price-smoothing benefits. The government holds leveraged positions in all bonds and rebalances the portfolio with more emphasis on the shorter maturities in recessions. We find that, when the number of available maturities increases from two to three (and four), the total amount of outstanding debt becomes procyclical. The additional maturities allow the government to respond to expenditure shocks by raising financial income without increasing the total outstanding debt. Through this mechanism, the government effectively subsidizes the private sector in recessions, resulting in higher leisure and less volatile labor taxes.
Literature review
This paper contributes to two strands of literature: (i) numerical methods in economics and (ii) optimal fiscal policy.
In terms of methods, this paper builds on the seminal work of den Haan and Marcet (1990), who introduced PEA. The idea of using neural networks to parameterize decision rules in a similar fashion to PEA goes back to Duffy and McNelis (2001). Our paper contributes to this literature by showing that an NN-based expectations algorithm can deal efficiently with multicollinearity, which we demonstrate by extending the optimal debt management problem studied by Faraglia et al. (2019) to more than two maturities. In particular, we exploit the computational gains to study the optimal government debt management problem of Faraglia et al. (2019) with three and four maturities, which yields new economic insights. Note that PEA has been extended more recently (see Faraglia, Marcet, Oikonomou, and Scott (2014) and Faraglia et al. (2019)) to deal with multicollinearity (condensed PEA) and overidentification (Forward-States PEA). Our methodology builds on condensed PEA and Forward-States PEA in the context of optimal fiscal policy, using machine learning to reduce the state space endogenously and to handle multicollinearity effectively when a stochastic simulation approach is adopted. In contrast, condensed PEA achieves this result by introducing an external loop that tests subsets of the state space as candidates to solve the model.
In contemporaneous work, Maliar, Maliar, and Winant (2021) and Maliar and Maliar (2022) discuss how neural networks can handle multicollinearity. In particular, they do so in the context of the Krusell and Smith (1998) model. We complement their work by demonstrating the robustness to multicollinearity in the context of optimal fiscal policy. Additionally, we show that the interaction between the capability of a neural network to deal with multicollinearity and its flexibility in approximating generic policy functions plays an important role in generating unbiased predictions. In Section 2.5, we show that if a researcher precommits to approximating the policy functions with polynomials that are misspecified, then, under multicollinearity among the state variables, the predictions will be biased. Thanks to the flexible nonparametric nature of a neural network, which does not require making ex ante assumptions about the functional form of the policy functions, this problem disappears.
PEA can potentially be used in combination with other standard econometric techniques that tackle the problem of multicollinearity, as in Judd, Maliar, and Maliar (2011). Similar to our paper, Judd, Maliar, and Maliar (2011) adopt a stochastic simulation approach and show how already established methods in econometrics can be used to alleviate the multicollinearity problem using a multicountry neoclassical growth model. We discuss the relation between our method and the methods of Faraglia et al. (2019) and Judd, Maliar, and Maliar (2011) in greater detail in Section 5.
Other papers that use machine learning to solve economic models include Scheidegger and Bilionis (2019), Azinovic, Gaegauf, and Scheidegger (2022), Fernández-Villaverde, Hurtado, and Nuño (2023), and Duarte, Duarte, and Silva (2023). Fernández-Villaverde, Hurtado, and Nuño (2023) use deep neural networks to approximate the aggregate laws of motion in a heterogeneous agents model featuring strong nonlinearities and aggregate shocks. Duarte, Duarte, and Silva (2023) cast the economic model in continuous time and use neural networks to approximate the Bellman equation. Maliar, Maliar, and Winant (2021) and Azinovic, Gaegauf, and Scheidegger (2022) approximate all the model equilibrium conditions using neural networks and use the simulated data to train them. Azinovic, Gaegauf, and Scheidegger (2022) solve a life-cycle model with borrowing constraints, aggregate shocks, and financial frictions using unsupervised machine learning. The main difference of our paper is that we leverage supervised machine learning to deal effectively with the multicollinearity problem typical of stochastic simulation approaches. In this context, we show how our algorithm can alleviate the curse of dimensionality, allowing us to explore the problem of the optimal maturity structure of government debt in a more realistic environment.
Our application also contributes to the strand of literature on optimal fiscal policy. In particular, it is relevant to the literature on the optimal maturity structure of government debt.3 Lustig, Sleet, and Yeltekin (2008) find that the optimal policy prescribes an almost exclusive role for the longest maturity in a model with no-lending constraints and a New Keynesian model where bonds are nominal. In our setting, we allow for government lending and study the hedging benefits of a choice between multiple maturities of real bonds. Bhandari et al. (2017b) study the optimal maturity structure in an open economy with two maturities, and Bigio, Nuño, and Passadore (2023) allow for a finite number of maturities in an economy with liquidity costs of issuing debt, where liquidity costs differ by maturity. Faraglia et al. (2019) is the closest paper to ours and studies the role of frictions in a closed economy with two types of bonds. Solving the Ramsey problem considered in this paper is particularly challenging, as the dimension of its state space increases significantly with the length of the maturities and the number of bonds. Moreover, this class of problems includes forward-looking constraints, so the commonly used recursive representation cannot be adopted. Marcet and Marimon (2019) provide an alternative formulation to solve for the time-inconsistent optimal contract under full commitment: a recursive Lagrangian, or saddle-point functional equation. The solution involves adding even more state variables to the original problem. These additional state variables, necessary to recursify the problem, create history dependence. In this context, we use our methodology to extend the literature by studying optimal debt management with three and four maturities in a closed economy. We find that the optimal policy prescribes an active role for the medium-term bonds. The additional maturities enable the planner to raise financial revenue without increasing the total outstanding debt in response to a positive expenditure shock. We show that, through this mechanism, the government uses the additional maturities to effectively subsidize the private sector in recessions, resulting in more leisure and less volatile labor taxes.
The paper is organized as follows. Section 2 is a user guide that introduces the reader to PEA, machine learning, and how to combine them in a simple Neoclassical Investment Model example. Section 3 introduces the reader to the problem of multicollinearity using a one-bond economy studied in Aiyagari et al. (2002) and describes the details of the NN-based expectations algorithm using a general model with N maturities. Section 4 presents and discusses the calibration and the quantitative results for the extended model with three and four maturities. Section 5 discusses and compares the NN-based expectations algorithm to other state-of-the-art methods. Section 6 concludes.
User guide: Machine learning and PEA
This section serves as an introduction to supervised machine learning. Specifically, it focuses on how to use it to solve a dynamic economic model in a similar fashion to PEA with stochastic simulation.4 The purpose of this section is solely to introduce the methodology in a simple environment. The method allows us to investigate more realistic models of increased complexity. Its benefits are highlighted in the application presented in Section 3 and arise from the ability of the algorithm to approximate nonlinear policy functions in the presence of a large and multicollinear state space.
Environment
The typical dynamic model contains intertemporal Euler equations, intratemporal Euler equations, and laws of motion
$$g(c_t, x_t) = \beta\,E_t\big[h(c_{t+1}, x_{t+1})\big],\qquad f(c_t, x_t) = 0,\qquad x_{t+1} = m(c_t, x_t, \varepsilon_{t+1}),$$
where $c_t$ is a vector of C controls (with E dynamic choices and C − E static choices), $x_t$ is a vector of endogenous and exogenous state variables, β is a time-discount factor, g, h, and f are functions implied by the model's optimality conditions, m collects the laws of motion, and $\varepsilon_{t+1}$ is a vector of innovation shocks. For example, in the stochastic neoclassical investment model, $c_t$ corresponds to consumption, $g(c_t, x_t)$ corresponds to the marginal utility of consumption, f does not apply because the model does not include intratemporal choices (e.g., labor), $x_t = (k_t, \theta_t)$ is a vector that contains the capital stock and TFP, and m is a function that describes the laws of motion for the capital stock, given by the resource constraint and the TFP Markov process, that is,
$$k_{t+1} = \theta_t k_t^{\alpha} + (1-\delta)k_t - c_t,\qquad \ln\theta_{t+1} = \rho\ln\theta_t + \varepsilon_{t+1}.$$
The typical PEA approximates the conditional expectations in the intertemporal Euler equations as polynomial functions of the state space $x_t$,
$$E_t\big[h(c_{t+1}, x_{t+1})\big] \approx P(x_t;\varphi).$$
The polynomial typically used in the PEA is
$$P(x_t;\varphi) = \exp\big(\varphi_0 + \varphi_1\ln x_{1,t} + \cdots + \varphi_S\ln x_{S,t}\big),$$
where S is the number of state variables. For a given sequence of exogenous aggregate shocks $\{\varepsilon_t\}_{t=1}^{T}$ and an initial guess of the polynomials' parameters $\varphi^{0}$, the standard stochastic PEA (described in Algorithm 1) aims to find parameters $\varphi^{*}$ that solve all Euler equations and all laws of motion.
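To fix ideas, the following is a minimal Matlab sketch of the stochastic-PEA loop just described, applied to the neoclassical investment model. The parameter values, the dampening scheme, and the initial guess (taken from the full-depreciation analytical solution) are illustrative assumptions, not the paper's choices.

```matlab
% Minimal sketch of stochastic PEA (Algorithm 1 flavor) for the
% neoclassical investment model. All numbers below are assumed.
rng(1);
T = 5000; alpha = 0.36; betad = 0.96; delta = 0.1; rho = 0.95; sig = 0.01;
eps = sig*randn(T,1);                             % fixed sequence of shocks
phi = [-log(betad*(1-alpha*betad)); -alpha; -1];  % guess from delta = 1 solution
lambda = 0.5;                                     % dampening on the update
theta = ones(T,1); k = ones(T,1); c = ones(T,1);
for it = 1:500
    for t = 1:T-1
        % Parameterized expectation E_t[u'(c_{t+1})R_{t+1}] = P(x_t; phi)
        P = exp(phi(1) + phi(2)*log(k(t)) + phi(3)*log(theta(t)));
        c(t) = 1/(betad*P);                       % Euler equation, log utility
        k(t+1) = theta(t)*k(t)^alpha + (1-delta)*k(t) - c(t);
        k(t+1) = max(k(t+1), 1e-3);               % guard early iterations
        theta(t+1) = exp(rho*log(theta(t)) + eps(t+1));
    end
    % Realized counterparts of the expectation term, regressed on states
    R = 1 + alpha*theta(2:T-1).*k(2:T-1).^(alpha-1) - delta;
    e = (1./c(2:T-1)).*R;
    X = [ones(T-2,1) log(k(1:T-2)) log(theta(1:T-2))];
    phinew = X\log(e);                            % regression step of Algorithm 1
    if norm(phinew - phi) < 1e-6, break; end
    phi = lambda*phinew + (1-lambda)*phi;         % dampened coefficient update
end
```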
When $\{x_t\}_{t=1}^{T}$ is generated by a stochastic simulation as in Algorithm 1, the matrix $X'X$, where X collects the simulated regressors of the polynomial P, is often ill-conditioned.5 Hence, with a finite-precision computer, the inverse of $X'X$ cannot be computed reliably, and it is challenging to compute the linear regression in line 9 of Algorithm 1. This problem potentially leads to jumps in the regression coefficients and to failure to converge.
Moreover, in the simple illustrative case of the neoclassical investment model, a first-order polynomial is enough to approximate the expectation term in the Euler equation. Generically speaking, richer models that feature a larger state space and nonlinearities require the use of higher-order approximations and/or cross-state terms. These circumstances further aggravate the multicollinearity problem, as the matrix $X'X$, with X augmented by the higher-order and cross terms, is even more ill-conditioned.
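The severity of the problem is easy to visualize. In the following minimal sketch (with assumed persistence and sample size), the condition number of $X'X$ grows by several orders of magnitude as powers of a single persistent simulated series are added as regressors.

```matlab
% Conditioning of X'X as the polynomial order n rises, when the
% regressor is a persistent simulated series (values assumed).
rng(2); T = 5000; rho = 0.98;
lnk = filter(1, [1 -rho], 0.01*randn(T,1));   % persistent simulated state
for n = 1:4
    X = lnk.^(0:n);                           % powers 0..n of the same state
    fprintf('order n = %d: cond(X''X) = %.2e\n', n, cond(X'*X));
end
```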
Supervised machine learning
In this paper, we use machine learning as a tool to learn how to represent the function that maps from the set of simulated state variables $\{x_t\}_{t=1}^{T}$ to the set of simulated terms $\{h(c_{t+1}, x_{t+1})\}_{t=1}^{T-1}$. For example, in the neoclassical investment model with log-utility over consumption, this would serve the purpose of representing the function
$$E_t\Big[\frac{1}{c_{t+1}}\big(1 + \alpha\theta_{t+1}k_{t+1}^{\alpha-1} - \delta\big)\Big] = P(k_t, \theta_t).$$
Machine learning proposes a flexible structure for the function P and infers a function from the generated data (which we label training data) to the set of generated examples (which we label training examples). This particular task of using machine learning to learn a function that maps from inputs to outputs based on training data and examples is referred to in the literature as supervised learning. Neural networks, in turn, are a powerful class of universal approximators able to deal with strong nonlinearities.
Fitting neural networks
In the NN-based expectations algorithm, the equivalent of the regression phase is called the training phase. As described in Supplemental Appendix C (Valaitis and Villa (2024)), a neural network is characterized by unknown weights: the input-to-hidden weights α and the hidden-to-output weights β.6 Similar to a regression, the objective is to seek weights such that the neural network $\mathcal{NN}(x_t;\alpha,\beta)$ fits the samples $\{x_t, y_t\}_{t=1}^{T}$. More precisely, the problem is to find
$$(\alpha^{*}, \beta^{*}) = \arg\min_{\alpha,\beta}\,R(\alpha,\beta)$$
such that the sum of squares
$$R(\alpha,\beta) = \sum_{t=1}^{T}\big(y_t - \mathcal{NN}(x_t;\alpha,\beta)\big)^2$$
is minimized. In a standard linear regression setting, typically (but not necessarily) this problem is solved analytically. This problem could also be solved using an iterative gradient procedure (e.g., gradient descent). This approach is typically more robust to multicollinearity since it does not require inverting the matrix $X'X$. An iteration n of gradient descent updates the weights of the neural network according to
$$\beta_{n+1} = \beta_n - \gamma_n\,\frac{\partial R}{\partial\beta}\Big|_{(\alpha_n,\beta_n)},\qquad(1)$$
$$\alpha_{n+1} = \alpha_n - \gamma_n\,\frac{\partial R}{\partial\alpha}\Big|_{(\alpha_n,\beta_n)},\qquad(2)$$
where the gradient can be derived using the chain rule for differentiation. More specifically, the partial derivatives $\partial R/\partial\beta$ and $\partial R/\partial\alpha$ in equations (1) and (2) can be efficiently computed through a two-pass algorithm called backpropagation (Rumelhart, Hinton, and Williams (1986)). Backpropagation applies the chain rule sequentially, iterating from the output layer to the input layer. Each neuron in the hidden layer receives and dispatches information only from and to neurons that are directly connected. For this reason, this process can be efficiently parallelized. When the backpropagation algorithm is applied to a single-layer neural network, it is known as the delta rule (Widrow and Hoff (1960)). One cycle through the full set of training samples is called a training epoch. In other words, completing a training epoch means that all training samples have had a chance to update the model parameters. Batch (or offline) learning builds the model by digesting the entire training set at once, whereas online learning allows the network to update the weights as new observations come in. The former is typically implemented by batch gradient descent, while the latter, which can typically handle larger training sets, is implemented by stochastic gradient descent. When the neural network weights are updated, the speed at which the model changes is controlled by the parameter $\gamma_n$ in equations (1) and (2). The parameter $\gamma_n$ is called the learning rate, and it is similar in spirit to a dampening parameter. Intuitively, it represents how quickly the model "learns." It can either be a constant (for batch learning) or optimized dynamically at each update by minimizing the error function.
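As an illustration of the updates in equations (1) and (2), the following minimal sketch trains a single-hidden-layer sigmoid network by batch gradient descent with hand-coded backpropagation. The data, layer sizes, learning rate, and number of epochs are arbitrary assumptions.

```matlab
% Batch gradient descent with backpropagation for one sigmoid hidden
% layer; A holds the input-to-hidden weights (alpha), B the
% hidden-to-output weights (beta).
rng(3);
X = randn(500, 3);                        % training inputs (rows = samples)
y = sin(X(:,1)) + 0.5*X(:,2).*X(:,3);     % training examples
H = 12;                                   % hidden units
A = 0.1*randn(size(X,2)+1, H);            % near-zero initial weights
B = 0.1*randn(H+1, 1);
gamma = 0.05;                             % learning rate
sigm = @(z) 1./(1+exp(-z));
for epoch = 1:2000
    Z  = sigm([ones(size(X,1),1) X]*A);   % forward pass: hidden activations
    yh = [ones(size(Z,1),1) Z]*B;         % forward pass: network output
    err = yh - y;                         % prediction error
    % Backward pass: propagate the error from the output layer back
    gB = [ones(size(Z,1),1) Z]'*err / numel(y);
    dZ = (err*B(2:end,:)') .* Z .* (1-Z); % chain rule through the sigmoid
    gA = [ones(size(X,1),1) X]'*dZ / numel(y);
    B = B - gamma*gB;                     % update (1)
    A = A - gamma*gA;                     % update (2)
end
```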
Other aspects that can affect the fitting of the neural network are: (i) the initial weights, (ii) the problem of overfitting, (iii) input normalization, and (iv) the number of neurons. Initial neural network weights are chosen as near-zero random values. Figure C.2 suggests that when the weight α is close to zero, the sigmoid approaches a linear function. This choice of initial weights allows the model to adapt to nonlinearities starting from the linear case.7 In practical terms, we solve the model by first initializing the neural network to a simplified version of the model. For example, before solving the neoclassical investment model as described in Algorithm 2, it is possible to solve the model analytically (in this particular case, by setting δ = 1), simulate an equilibrium sequence with the analytical solution, and use it to train the neural network. In a more complicated scenario, such as the optimal maturity management problem presented in this paper, we first solve the model without debt (i.e., the government only uses the income from taxes to finance government expenditure) to initialize the neural network. The model should not overfit the data. Since stochastic simulation methods (such as PEA) only explore a subset of the ergodic set of state variables (i.e., those combinations of state variables simulated in the equilibrium), we optimize the model for out-of-sample predictions. We split the simulated data randomly into a training set (in-sample) and a validation set (out-of-sample) with a 70–30 proportion, respectively.8 The number of epochs is determined by maximizing the neural network's performance on the validation set. All inputs are normalized to have mean zero and unitary standard deviation. This procedure ensures that all inputs have a comparable magnitude. If some inputs were of a bigger order of magnitude, the weights linked to those inputs would experience a faster update speed.9 This could potentially impair the learning process and lead to slower convergence or, worse, larger mean squared prediction errors. The choice of the number of neurons in the hidden layer should be guided by the trade-off between in-sample fit and out-of-sample performance, as illustrated in Figure 1 (the figure refers to the neoclassical growth model that we present as an illustrative example in the next section). Increasing the number of hidden units tends to increase the in-sample fit but leads to overfitting. We select the number of units by minimizing the mean squared prediction error calculated on the validation set.
Figure 1. Root Mean Squared Error (RMSE) in function of the number of neurons in the hidden layer. Note: The figure shows the relation between the number of hidden units and neural network performance in the neoclassical growth model. Solid blue line—network performance on the training set. Dashed purple line—network performance on the validation set. Circles show network performance for a specific number of units. Lines represent the moving averages.
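In the toolbox used here, the normalization and the 70–30 split can be implemented as in the following minimal sketch, where X and y are synthetic stand-ins for the simulated states and expectation terms.

```matlab
% Input normalization and a 70-30 train-validation split using the
% toolbox's data-division options; the data below are synthetic.
rng(4);
X = [randn(1000,1)*50 + 300, randn(1000,1)];   % inputs on very different scales
y = log(X(:,1)) + X(:,2).^2;
Xn = zscore(X);                                % mean zero, unit std. deviation
net = feedforwardnet(12);
net.divideFcn = 'dividerand';                  % random sample split
net.divideParam.trainRatio = 0.70;             % training (in-sample) share
net.divideParam.valRatio   = 0.30;             % validation (out-of-sample) share
net.divideParam.testRatio  = 0;                % no separate test set
net = train(net, Xn', y');                     % early stopping on validation error
```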
This section describes the implementation of the NN-based expectations algorithm applied to the neoclassical investment model. We use Matlab, and we leverage the Statistics and Machine Learning Toolbox. The illustrative example code, together with the comparison with other methods and the procedure that selects the optimal number of neurons, is publicly available.10 The purpose of using Matlab and disseminating this application is to facilitate the adoption of machine learning in economics with well-known tools in an easy-to-adopt package. In this example, we use a single-layer neural network with 12 neurons (this number of neurons minimizes the out-of-sample mean squared prediction error, as shown in Figure 1).
We first calculate the steady state, which is particularly useful to build a guess to initialize the neural network weights. The command feedforwardnet(12) creates a neural network with one hidden layer that contains 12 neurons. By default, this neural network is trained (through the function train) with Levenberg–Marquardt backpropagation, and has a maximum number of epochs set to 1000. We generate an initial data set using the deterministic steady state and substituting the value of the shock.
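A minimal sketch of this initialization step is shown below; the model parameters and the grid of substituted shock values are illustrative assumptions, and only feedforwardnet and train are the toolbox functions mentioned above.

```matlab
% Train the network on data built around the deterministic steady state
% before starting the main loop (parameter values assumed).
alpha = 0.36; betad = 0.96; delta = 0.1;
kss = ((1/betad - 1 + delta)/alpha)^(1/(alpha-1));  % steady-state capital
css = kss^alpha - delta*kss;                        % steady-state consumption
thetaGrid = exp(linspace(-0.05, 0.05, 200))';       % substituted shock values
Xinit = [kss*ones(200,1), thetaGrid];               % states: capital and TFP
Einit = (1./css).*(1 + alpha*thetaGrid*kss^(alpha-1) - delta);  % target term
net = feedforwardnet(12);                  % one hidden layer, 12 neurons
net.trainParam.epochs = 1000;              % default Levenberg-Marquardt training
net = train(net, Xinit', Einit');          % initial supervised training
```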
We then proceed to solve the model using the equivalent of Algorithm 1, except we use the neural network to approximate the expectation contained in the optimality conditions of the model. We call this the NN-based expectations algorithm, and a detailed description of the code is laid out in Algorithm 2.
Algorithm 2. NN-based expectations algorithm applied to the neoclassical growth model.
In a more complex environment with a large state space—where a stochastic simulation approach is desirable—the advantages of this method lie in the interaction between a satisfactory approximation of the model nonlinearities and the degree of multicollinearity among the simulated states. In PEA, the choice of polynomials is quite arbitrary as the policies' functional forms are ex ante unknown (one has to rely on an ex post accuracy test to make sure that the approximation is satisfactory). If the functional form of the chosen approximator cannot satisfactorily approximate the equilibrium policies, the presence of multicollinearity can lead to bias in the parameter estimates. The neural network does not suffer from this problem as it is a universal approximator. In the next section, we conclude the user guide illustrating this point.
Neural networks and multicollinearity
One general problem is that the functional form, not just the parameters, that links the state variables and the approximated terms is ex ante unknown. A standard practice is to make these approximations using polynomials whose order and cross-terms are typically chosen through trial and error. When the policies are correctly specified, multicollinearity leads to consistent, yet noisy, parameter estimates. However, if the chosen functional forms are not suitable to approximate the true policy functions, multicollinearity can potentially lead to severely biased and less precise predictions, as we show in the following simple example.
For simplicity, imagine that we would like to approximate the policy function of the neoclassical investment model with two TFP shocks, full depreciation (δ = 1), and log-utility ($u(c) = \ln c$). The true functional form of the policy function for the capital choice (in this simple case it can be solved analytically) is
$$y_t \equiv k_{t+1} = \alpha\beta\,x_{2,t}\,x_{3,t}\,k_t^{\alpha},\qquad(3)$$
where the true parameters are α and β, and $x_{2,t}$ is a log-AR(1) process with persistence 0.8 and standard deviation of the innovation shock of 0.0224. Moreover, $x_{3,t}$ is determined using the following formula:
$$\ln x_{3,t} = \lambda\ln x_{2,t} + (1-\lambda)\ln z_t,$$
where $z_t$ is a log-AR(1) process with persistence 0.8 and standard deviation of the innovation shock of 0.0224, and the weight λ governs the correlation between $x_{2,t}$ and $x_{3,t}$.
We generate, through stochastic simulation, equilibrium sequences $\{y_t, x_t\}_{t=1}^{T}$, where $y_t = k_{t+1}$ and $x_t = (k_t, x_{2,t}, x_{3,t})$ is a vector that contains the three state variables. The objective is to use $\{y_t, x_t\}_{t=1}^{T}$ in order to infer the functional form of equation (3). When $\{x_t\}$ is generated by a stochastic simulation as in Algorithm 1, the matrix $X'X$ is often ill-conditioned. We simulate different degrees of multicollinearity by randomly generating sequences $x_{2,t}$ and $x_{3,t}$ with different degrees of correlation (i.e., different values of λ), and we calculate the associated $y_t$ using equation (3). We evaluate the success of the prediction as a function of the degree of multicollinearity using (i) a linear polynomial and (ii) a neural network. Note that on purpose we incorrectly assume that the mapping between state variables and policy is linear, $\hat y_t = \beta_0 + \beta_1 k_t + \beta_2 x_{2,t} + \beta_3 x_{3,t}$.11 Also note that the neural network has a flexible nonparametric nature and, therefore, does not require making ex ante assumptions about the functional form of the policy functions. The success of the prediction is assessed using the mean squared prediction error (MSPE), which is the average prediction error at time t over many training samples. The error can be decomposed into bias and variance terms
$$E\big[(y_t - \hat y_t)^2\big] = \big[y_t - E(\hat y_t)\big]^2 + E\Big[\big(\hat y_t - E(\hat y_t)\big)^2\Big].\qquad(4)$$
Figure 2 reports the average MSPE for the entire validation set as a function of the correlation between the two exogenous shocks $x_{2,t}$ and $x_{3,t}$. Note that the higher the correlation, the higher the multicollinearity between $x_{2,t}$ and $x_{3,t}$.
Figure 2. Mean squared prediction error with a neural network and a polynomial. Note: The figure shows the mean squared prediction error $\frac{1}{n}\sum_{t=1}^{n}\big[y_t - E(\hat y_t)\big]^2$ as a function of the correlation between $x_{2,t}$ and $x_{3,t}$. Blue line with circles—NN, purple line with crosses—polynomial regression.
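The experiment can be reproduced with the following minimal sketch, which fits the misspecified linear rule and a 12-neuron network on a 70–30 split for several values of λ. The specific values of α and β and the training settings are assumptions made for illustration.

```matlab
% Misspecified linear fit vs. neural network as the correlation between
% x2 and x3 rises (parameter values assumed).
rng(5); T = 2000; alpha = 0.33; betad = 0.96; rho = 0.8; sig = 0.0224;
for lam = [0 0.25 0.5 0.75 0.95]
    lnx2 = filter(1, [1 -rho], sig*randn(T,1));
    lnz  = filter(1, [1 -rho], sig*randn(T,1));
    lnx3 = lam*lnx2 + (1-lam)*lnz;            % correlation rises with lambda
    x2 = exp(lnx2); x3 = exp(lnx3);
    k = 0.2*ones(T,1);
    for t = 1:T-1, k(t+1) = alpha*betad*x2(t)*x3(t)*k(t)^alpha; end
    y = k(2:T); X = [k(1:T-1) x2(1:T-1) x3(1:T-1)];
    ntr = floor(0.7*(T-1)); itr = 1:ntr; iva = ntr+1:T-1;  % 70-30 split
    bols = [ones(ntr,1) X(itr,:)]\y(itr);                  % misspecified linear fit
    fitL = [ones(numel(iva),1) X(iva,:)]*bols;
    net = feedforwardnet(12); net.trainParam.showWindow = false;
    net = train(net, X(itr,:)', y(itr)');
    fitN = net(X(iva,:)')';
    fprintf('corr %.2f: MSPE linear %.2e, NN %.2e\n', ...
            corr(lnx2, lnx3), mean((y(iva)-fitL).^2), mean((y(iva)-fitN).^2));
end
```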
The higher the correlation between the state variables, the higher the inaccuracy of the polynomial regression model. Moreover, if we decompose the MSPE using equation (4), we find that most of the prediction error comes from the squared-bias term, as shown in Figures B.2 and B.3 in Appendix B.4. Because of its nonparametric nature, the neural network adapts to the shape of the function it approximates without having to guess the functional form ex ante. This experiment suggests that the nonparametric nature of a neural network is particularly handy in solving economic models characterized by policy functions whose functional forms are ex ante unknown, which potentially contain significant nonlinearities, and whose domain presents multicollinear states.12 In the next section, we illustrate the use of neural networks in a model that contains such features.
Model and solution method
The model we work with is an extension of the one-bond economy analyzed in Aiyagari et al. (2002) and extended to two bonds in Faraglia et al. (2019). We work with this model for two reasons. First, it is a difficult computational problem that features a large multicollinear state space with nonlinearities that are difficult to approximate with a parametric approach (i.e., borrowing and lending constraints).13 Second, extending Faraglia et al. (2019) to more than two maturities is a relevant economic problem, since it helps in determining the optimal maturity structure of government debt. We start by introducing the reader to a one-bond economy with a single maturity of N periods. We then present our methodology in a general model with N maturities. The numerical advantages of our methodology allow us to explore the optimal maturity structure of government debt with three and four bonds. Quantitative results are presented in Section 4.
Illustrative model: One-bond economy
The economy is populated by a representative household with preferences over consumption c and leisure l. The representative household chooses sequences of consumption and leisure to maximize its time-0 expected lifetime utility
$$E_0\sum_{t=0}^{\infty}\beta^t u(c_t, l_t),$$
subject to the budget constraint
$$c_t + p^N_t b^N_t = (1-\tau_t)(1-l_t) + p^{N-1}_t b^N_{t-1},$$
where $b^N_t$ indicates an N-period maturity bond and $p^N_t$ is its corresponding price.14 The only source of aggregate risk in the economy is an exogenous stream of government expenditures $g_t$. In each period, the government can finance $g_t$ by: (i) levying a proportional labor tax $\tau_t$ and (ii) issuing a nonstate contingent bond with a maturity of N periods. Hence, the government's budget constraint is
$$g_t + p^{N-1}_t b^N_{t-1} = \tau_t(1-l_t) + p^N_t b^N_t.$$
The aggregate resource constraint of the economy is $c_t + g_t = 1 - l_t$, where $1 - l_t$ is the period's GDP. We assume the government can buy back and reissue the entire stock of outstanding debt in each period. The government sets taxes and issues debt to solve a Ramsey taxation problem. We adopt the primal approach and assume the government's ability to borrow and lend is bounded. Under these conditions, the government's problem is
$$\max_{\{c_t,\,b^N_t\}}\ E_0\sum_{t=0}^{\infty}\beta^t u(c_t, l_t)$$
subject to a sequence of measurability constraints15
$$u_{c,t}c_t - u_{l,t}(c_t + g_t) = \beta^{N-1}E_t\big[u_{c,t+N-1}\big]b^N_{t-1} - \beta^{N}E_t\big[u_{c,t+N}\big]b^N_t,$$
with borrowing and lending limits16
$$\underline M \le b^N_t \le \bar M.$$
The government's optimality conditions are
$$u_{c,t} - u_{l,t} + \mu_t\big[u_{cc,t}c_t + u_{c,t} + u_{ll,t}(c_t + g_t) - u_{l,t}\big] + \big(\mu_{t-N} - \mu_{t-N+1}\big)u_{cc,t}\,b^N_{t-N} = 0,\qquad(5)$$
$$\beta^{N}E_t\big[u_{c,t+N}\big]\mu_t - \beta^{N}E_t\big[u_{c,t+N}\,\mu_{t+1}\big] = \bar\xi_t - \underline\xi_t,\qquad(6)$$
$$\bar\xi_t\big(\bar M - b^N_t\big) = 0,\qquad \underline\xi_t\big(b^N_t - \underline M\big) = 0,\qquad \bar\xi_t,\ \underline\xi_t \ge 0,\qquad(7)$$
where $\mu_t$ is the Lagrange multiplier on the time t measurability constraint, and $\bar\xi_t$ and $\underline\xi_t$ are the Lagrange multipliers on the upper and the lower bounds, respectively. By issuing debt at time t, the government commits to increasing taxes and/or to reissuing debt at time t + N. When the government sets taxes between time t and time t + N, it needs to take into account its past actions in the form of all lags of the state variables up to N. More formally, the Ramsey planner's state space is
$$X_t = \big\{g_t,\ b^N_{t-1},\ldots,b^N_{t-N},\ \mu_{t-1},\ldots,\mu_{t-N}\big\}.$$
The state space contains 2N + 1 variables, with many lags of the same state variable (e.g., μ), which tend to be highly correlated with each other. Moreover, equation (6) reveals that the Lagrange multiplier on the implementability constraint follows a random walk, creating an additional source of multicollinearity between the state variables. We solve the model with maturity N = 10, and we report in Figure 3 the autocorrelation function of the simulated equilibrium bond sequence $\{b^N_t\}$. It is clear that the previous 10 lags of the same variable, which are all part of the state space, are highly correlated with each other in the simulated sequence.
Figure 3. Autocorrelation function of the equilibrium bond sequence. Note: The figure shows the autocorrelation function of $b^N_t$. The numbers are obtained after simulating the model equilibrium dynamics for T = 5000.
For this reason, the model can hardly be solved using PEA (Algorithm 1). In the literature, this problem has been tackled by an algorithm called condensed PEA. Condensed PEA proposes to approximate the expected values in equations (5), (6), and (7) using functions of a subset $X^c_t$ of the state space ($X^c_t$ is also called the core set). These approximations are $E_t[u_{c,t+N}] \approx P_1(X^c_t;\beta_1)$, $E_t[u_{c,t+N-1}] \approx P_2(X^c_t;\beta_2)$, and $E_t[\mu_{t+1}u_{c,t+N}] \approx P_3(X^c_t;\beta_3)$, where both the functions and the core set (including its cardinality) are ex ante unknown. The subset of the information set X is selected through an iterative procedure called condensed PEA. In essence, this method adds an additional loop to PEA and keeps extracting orthogonal components from the state space, similar to Principal Component Analysis (PCA), except that the number of factors does not have to be chosen ex ante. A more detailed description of the procedure can be found in Section 5, Algorithm 4, where we compare our methodology to existing ones in the literature. In the next section, we present our methodology in a model with N maturities. Due to the presence of multiple lagged bonds, the multicollinearity problem is further accentuated.
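The following minimal sketch illustrates, on synthetic data, the core idea that condensed PEA automates: lags of a persistent simulated series are nearly collinear, and a handful of orthogonal components can stand in for them in the regression step. The data-generating process and the number of retained components are assumptions.

```matlab
% Orthogonal components of a collinear lag structure (synthetic data).
rng(6); T = 3000; rho = 0.97;
s = filter(1, [1 -rho], randn(T,1));        % persistent simulated state
X = zeros(T-10, 10);
for j = 1:10, X(:,j) = s(11-j:T-j); end     % ten highly correlated lags
fprintf('cond(X''X) = %.2e\n', cond(X'*X));
[~, score] = pca(zscore(X));                % orthogonal components
y = s(11:T) + 0.1*randn(T-10,1);            % stand-in for the expectation term
bpc = [ones(T-10,1) score(:,1:2)]\y;        % regress on the leading components
```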
Optimal maturity management with N bonds
The economy is populated by a representative household with preferences over consumption c and leisure l. The representative household chooses sequences of consumption and leisure to maximize its time-0 expected lifetime utility
$$E_0\sum_{t=0}^{\infty}\beta^t u(c_t, l_t),$$
subject to the budget constraint
$$c_t + \sum_{i\in\mathcal N} p^i_t b^i_t = (1-\tau_t)(1-l_t) + \sum_{i\in\mathcal N} p^{i-1}_t b^i_{t-1},$$
where $b^i_t$ indicates an i-period maturity bond, $p^i_t$ is its corresponding price, and $\mathcal N$ denotes the set of available maturities. The only source of aggregate risk in the economy is an exogenous stream of government expenditures $g_t$. In each period, the government can finance $g_t$ by: (i) levying a proportional labor tax $\tau_t$ and (ii) issuing nonstate contingent bonds with maturities $i \in \mathcal N$. The government's budget constraint reads
$$g_t + \sum_{i\in\mathcal N} p^{i-1}_t b^i_{t-1} = \tau_t(1-l_t) + \sum_{i\in\mathcal N} p^i_t b^i_t.$$
Sequential formulation of the Ramsey problem
Combining the technology constraint, $c_t + g_t = 1 - l_t$, with the household's labor optimality condition, $u_{l,t}/u_{c,t} = 1 - \tau_t$, yields an expression for the surplus
$$s_t \equiv \tau_t(1-l_t) - g_t = c_t - \frac{u_{l,t}}{u_{c,t}}(c_t + g_t).$$
Substitute bond prices $p^i_t = \beta^i E_t[u_{c,t+i}]/u_{c,t}$, pinned down by the household's Euler equations, to get
$$u_{c,t}c_t - u_{l,t}(c_t + g_t) = \sum_{i\in\mathcal N}\Big(\beta^{i-1}E_t\big[u_{c,t+i-1}\big]b^i_{t-1} - \beta^{i}E_t\big[u_{c,t+i}\big]b^i_t\Big),$$
with borrowing and lending limits17
$$\underline M \le b^i_t \le \bar M\ \ \forall i\in\mathcal N,\qquad \underline M^{\text{total}} \le \sum_{i\in\mathcal N} b^i_t \le \bar M^{\text{total}}.$$
The optimality conditions are
$$u_{c,t} - u_{l,t} + \mu_t\big[u_{cc,t}c_t + u_{c,t} + u_{ll,t}(c_t + g_t) - u_{l,t}\big] + u_{cc,t}\sum_{i\in\mathcal N}\big(\mu_{t-i} - \mu_{t-i+1}\big)b^i_{t-i} = 0,$$
$$\beta^i E_t\big[u_{c,t+i}\big]\mu_t - \beta^i E_t\big[u_{c,t+i}\,\mu_{t+1}\big] = \bar\xi^i_t - \underline\xi^i_t + \bar\xi^{\text{total}}_t - \underline\xi^{\text{total}}_t,\qquad i\in\mathcal N,$$
where $\bar\xi^i_t$ and $\underline\xi^i_t$ are the Lagrange multipliers on the upper and the lower bounds, respectively, and $\bar\xi^{\text{total}}_t$ and $\underline\xi^{\text{total}}_t$ are the Lagrange multipliers on the upper and the lower bounds on the total bond portfolio. In the following section, we describe our computational strategy in detail. Details on the implementation and results using Epstein–Zin preferences can be found in Appendix A.
NN-based expectations algorithm
In this section, we describe the main algorithm, which is an extension of the basic idea illustrated in Section 2.4, applied to an optimal fiscal policy model with incomplete markets and multiple maturities. Here, we present the key steps, while implementation details can be found in Appendix B.1. There are N bonds available, with maturities from 1 to N periods. The state space at time t is $X_t = \big\{g_t,\ \{b^i_{t-1},\ldots,b^i_{t-i}\}_{i\in\mathcal N},\ \mu_{t-1},\ldots,\mu_{t-N}\big\}$. The neural network needs to approximate the conditional expectations appearing in the optimality conditions as functions of $X_t$. We model these relationships using one single-layer neural network $\mathcal{NN}(X_t;\omega)$. In particular, if the long maturity is N, then the terms to approximate are
$$\Big\{E_t\big[u_{c,t+i}\big],\ E_t\big[u_{c,t+i-1}\big],\ E_t\big[\mu_{t+1}u_{c,t+i}\big]\Big\}_{i\in\mathcal N}.$$
For example, in the two-bond case there are six terms to approximate and, if the short bond has a 1-period maturity, they reduce to five.18 Given starting values for the bonds and for μ, and initial weights for $\mathcal{NN}$, simulate a sequence of $c_t$, $b^i_t$, and $\mu_t$ as follows:19
- As suggested by Maliar and Maliar (2003), we initially restrict the solution artificially within tight bounds on all debt instruments, and we refine the solution gradually while we open the bounds slowly. These bounds are particularly important and initially need to be tight and open slowly, since at the beginning the neural network can only make accurate predictions around zero debt, that is, our initialization point. Additionally, we use penalty functions instead of the ξ-terms to avoid out-of-bound solutions.20 Since $\mu_t$ is identified by the first-order condition for $b^i_t$, it is overidentified if the number of available maturities is greater than one:
$$\beta^i E_t\big[u_{c,t+i}\big]\mu_t - \beta^i E_t\big[u_{c,t+i}\,\mu_{t+1}\big] = \bar\xi^i_t - \underline\xi^i_t,\qquad i\in\mathcal N.$$
We tackle this problem by using the forward-states approach described in Faraglia et al. (2019). This involves approximating the expected value terms at time t with functions of the state variables that are relevant at t + 1 instead of t and invoking the law of iterated expectations, such that we calculate $E_t[\mathcal{NN}(X_{t+1};\omega)]$ instead of $\mathcal{NN}(X_t;\omega)$. This is done in two steps. First, we replace the terms $E_t[\mu_{t+1}u_{c,t+i}]$ in the optimality conditions with $E_t\big[\mu_{t+1}E_{t+1}(u_{c,t+i})\big]$ and, instead of approximating $E_t[u_{c,t+i}]$, $E_t[u_{c,t+i-1}]$, and $E_t[\mu_{t+1}u_{c,t+i}]$, we use the information set $X_{t+1}$ to approximate $E_{t+1}[u_{c,t+i}]$, $E_{t+1}[u_{c,t+i-1}]$, and $\mu_{t+1}E_{t+1}[u_{c,t+i}]$. Then we use Gaussian quadrature to calculate the conditional expectations of the neural network evaluated at $X_{t+1}$ (a sketch of this quadrature step appears after this list).
- To perform the stochastic simulation, choose T big enough and find $c_t$, $b^i_t$, and $\mu_t$ that solve the following system of equations, given by the measurability constraint and the optimality conditions of Section 3.2 with each conditional expectation replaced by its neural network approximation:
$$\begin{cases} u_{c,t}c_t - u_{l,t}(c_t + g_t) = \sum_{i\in\mathcal N}\big(\beta^{i-1}\mathcal E^{i-1}_t b^i_{t-1} - \beta^{i}\mathcal E^{i}_t b^i_t\big), \\ u_{c,t} - u_{l,t} + \mu_t\big[u_{cc,t}c_t + u_{c,t} + u_{ll,t}(c_t + g_t) - u_{l,t}\big] + u_{cc,t}\sum_{i\in\mathcal N}\big(\mu_{t-i} - \mu_{t-i+1}\big)b^i_{t-i} = 0, \\ \beta^i\big(\mu_t\,\mathcal E^{i}_t - \mathcal E^{\mu,i}_t\big) = \bar\xi^i_t - \underline\xi^i_t,\qquad i\in\mathcal N, \end{cases}\qquad(8)$$
where $\mathcal E^{i}_t$, $\mathcal E^{i-1}_t$, and $\mathcal E^{\mu,i}_t$ denote the neural network approximations of $E_t[u_{c,t+i}]$, $E_t[u_{c,t+i-1}]$, and $E_t[\mu_{t+1}u_{c,t+i}]$. The system of equations (8) contains multiple Lagrange multipliers (arising from the inequality constraints). This poses a significant computational challenge. Ideally, one would numerically solve the unconstrained model and then verify that the constraints do not bind; if, for example, the upper bound on $b^i_t$ binds, one would set $b^i_t = \bar M$ and find the associated values for consumption and leisure. In a multiple-bond model, this is challenging because, after setting $b^i_t = \bar M$, one needs to check whether other constraints bind in the recomputed solution, and if they do, enforce them and recalculate the solution again, and so on. To overcome this challenge, we augment the objective function with the following differentiable penalty function:
$$\Phi\big(\{b^i_t\}\big) = \phi\sum_{i\in\mathcal N}\Big[\max\big(0,\ b^i_t - \bar M\big)^2 + \max\big(0,\ \underline M - b^i_t\big)^2\Big],$$
where ϕ controls the severity of the penalty (see the sketch after this list). More details can be found in Appendix B. The system of equations (8) can be rederived after including the aforementioned penalty function. We solve the system of equations (8) using the Levenberg–Marquardt algorithm. Since this is a local solver, there is no guarantee that the system is solved globally given a particular initial guess. In our implementation, we attempt to solve the system for at most maxrep different starting points. If the solution errors fall below our specified threshold, the algorithm accepts the solution and moves to the next period t. Otherwise, we pick the solution with the lowest error.
- If the solution error in the stochastic simulation is large, or a reliable solution could not be found, the algorithm automatically restores the previous period's neural network and performs the stochastic simulation with a reduced bound. More specifically, if an unreliable solution has been detected in iteration i, the algorithm restores the environment of iteration i − 1 and performs the stochastic simulation with
$$\bar M^{\text{new}} = \frac{\bar M_{i-1} + \bar M_{i}}{2}.$$
- If the solution calculated by shrinking the bound at iteration i − 1 is still not satisfactory, the algorithm does not go back another iteration but uses the same neural network and tries to lower $\bar M^{\text{new}}$ again toward $\bar M_{i-1}$. Once a reliable solution is found, the algorithm proceeds to calculate the solution for iteration i again, but with
$$\bar M_i^{\text{new}} = \frac{\bar M_i + \bar M^{\text{new}}}{2}.$$
In this way, if an error is detected multiple times, we guarantee that both $\bar M^{\text{new}}$ and $\bar M_i^{\text{new}}$ keep shrinking toward $\bar M_{i-1}$, and there should exist a point close enough to $\bar M_{i-1}$ such that the system can be reliably solved with both $\bar M^{\text{new}}$ and $\bar M_i^{\text{new}}$.
- If the solution found at iteration i is satisfactory, the neural network enters the learning phase supervised by the implied model dynamics, the bounds are increased, and a new iteration starts.
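Two of the computational devices used above can be sketched compactly in Matlab: the differentiable penalty that stands in for the ξ-terms, and the Gauss–Hermite quadrature used to integrate the network over the Gaussian innovation of $g_{t+1}$ in the forward-states step. The quadratic-hinge shape of the penalty, the stand-in network nnEval, and the composition of the state vector are assumptions made for illustration.

```matlab
% (i) Differentiable penalty replacing the bound multipliers; the
% quadratic-hinge shape is an assumed form, phi controls its severity.
phi = 1e4;
penalty = @(b, Mlo, Mup) phi*(max(0, b - Mup).^2 + max(0, Mlo - b).^2);

% (ii) Forward-states expectation E_t[NN(X_{t+1})] by 5-point
% Gauss-Hermite quadrature over the Gaussian innovation of g_{t+1}.
mu_g = 0.0042; rho_g = 0.95; sig_g = 0.0031;   % calibration from Table 1
g_t = 0.084;                                   % current expenditure (assumed)
otherStates = zeros(5,1);                      % hypothetical remaining states
nnEval = @(x) sum(x);                          % stand-in for the trained network
gh_x = [-2.0201828705; -0.9585724646; 0; 0.9585724646; 2.0201828705];
gh_w = [ 0.0199532421;  0.3936193232; 0.9453087205; 0.3936193232; 0.0199532421];
Enn = 0;
for q = 1:5
    gnext = mu_g + rho_g*g_t + sqrt(2)*sig_g*gh_x(q);  % g_{t+1} at node q
    Enn = Enn + gh_w(q)/sqrt(pi) * nnEval([gnext; otherStates]);
end
```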
We repeat this procedure until the neural network predictions converge and the simulated sequences of $b^i_t$ and $\mu_t$ do not change.21 Algorithm 3 describes the algorithm in greater detail, and Appendix B.1 contains further implementation details.
Algorithm 3. NN-based expectations algorithm applied to optimal maturity management.
In this section, we exploit the computational gains that derive from the robustness of the NN-based expectations algorithm to multicollinearity to study the optimal maturity management problem of Section 3 with four maturities of 1, 5, 10, and 15 periods. Specifically, we are interested in the effects on policy and allocations arising from the additional hedging opportunities relative to a portfolio with only a short and a long maturity. We first present the calibration and then our numerical results.
Calibration
We calibrate the model following the strategy of Faraglia et al. (2019). Specifically, we use additively separable utility in consumption and leisure
$$u(c_t, l_t) = \frac{c_t^{1-\gamma}}{1-\gamma} + \chi\frac{l_t^{1-\eta_l}}{1-\eta_l},$$
with γ = 1.5 and $\eta_l$ = 1.8, respectively. We calibrate χ such that households spend on average 2/3 of their time endowment on leisure in the steady state, which gives a value of 2.87.
We set β to 0.96 and, for the sake of comparison, we follow the calibration strategy for $g_t$ from Faraglia et al. (2019). We assume that $g_t$ follows an AR(1) process, $g_t = \mu_g + \rho_g g_{t-1} + \varepsilon_{g,t}$, with $\rho_g$ equal to 0.95. Then we look for the value of $\mu_g$ such that government expenditure is on average equal to 25% of GDP. This gives a value of 0.0042. Lastly, we set the value for $\sigma_g$ such that $g_t$ is always at least 15% and at most 35% of GDP in a simulated sample of ten thousand periods, which gives a value of 0.0031. Note that such a parameterization is also broadly aligned with the estimates from the data.22
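To make the calibration logic concrete, the following minimal sketch simulates the expenditure process under the AR(1) form written above and checks the implied expenditure-to-GDP shares; approximating GDP by a constant level of about 1/3 is an assumption made here purely for illustration.

```matlab
% Simulate g_t with (mu_g, rho_g, sig_g) from Table 1 and check that the
% expenditure share stays within 15-35% of an assumed GDP level of 1/3.
rng(7); T = 10000;
mu_g = 0.0042; rho_g = 0.95; sig_g = 0.0031;
g = zeros(T,1); g(1) = mu_g/(1-rho_g);        % start at the unconditional mean
for t = 2:T
    g(t) = mu_g + rho_g*g(t-1) + sig_g*randn;
end
gdp = 1/3;                                    % assumed average output
fprintf('mean g/GDP = %.2f, min = %.2f, max = %.2f\n', ...
        mean(g)/gdp, min(g)/gdp, max(g)/gdp);
```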
The government has four debt instruments at its disposal. We set maturities to 1, 5, 10, and 15 years and denote $b^1$, $b^5$, $b^{10}$, and $b^{15}$ as short, medium, long, and very long bonds, respectively. In addition to debt limits on individual bonds, we introduce a total debt limit of ±100% of GDP both in our benchmark model with only short and long bonds and in our calibration with four bonds. A fixed limit on total debt allows us to make a fair comparison and isolate the effects of the hedging benefits of the additional bonds on the household's welfare. Table 1 summarizes the parameter values.
Table 1. Calibrated parameters.

                                Parameter                          Value
Preferences
  Discount factor               β                                  0.96
  Risk aversion                 γ                                  1.5
  Labor disutility              χ                                  2.87
  Leisure curvature             η_l                                1.8
Government
  Average g_t                   μ_g                                0.0042
  Volatility of g_t             σ_g                                0.0031
  Autocorr. of g_t              ρ_g                                0.95
  Debt limits                   M̄, M̲, M̄_total, M̲_total             ±100% of GDP
Before proceeding, it is worth noting that we tested our methodology in the two-bond case. Our results in a two-bond model confirm the findings of Angeletos (2002) and Faraglia et al. (2019), whereby the optimal debt portfolio includes a negative short bond position and a positive long bond position, as shown in Table 2. Moreover, as also shown in Table 2, the bond portfolio positions are large and volatile, as in Buera and Nicolini (2004).
Table 2. Selected bond moments: means and variances.

Mean:
Model        b^1       b^5       b^10      b^15
1 Bond       0.017     -         -         -
2 Bonds     −0.03      -         0.343     -
3 Bonds     −0.555     0.704     0.632     -
4 Bonds     −0.63      0.884     0.908    −0.173

Standard deviation:
Model        b^1       b^5       b^10      b^15
1 Bond       0.243     -         -         -
2 Bonds      0.1       -         0.122     -
3 Bonds      0.591     0.34      0.533     -
4 Bonds      0.218     0.266     0.27      0.374

Note: The table shows the average outstanding debt for each maturity (top panel). Moreover, the table also reports the standard deviations of each outstanding position (bottom panel).
Optimal debt management with three and four bonds
Tables 2 and 3 summarize the equilibrium outstanding debt-to-GDP ratios for each maturity and for each model with an increasing number of bonds. Moments are calculated given a sequence of government expenditure shocks with the persistence and volatility specified in Table 1.
Table 3. Selected bond moments: correlations.

Correlation with g_t:
Model        b^1       b^5       b^10      b^15
1 Bond       0.549     -         -         -
2 Bonds      0.707     -        −0.482     -
3 Bonds      0.35     −0.181    −0.302     -
4 Bonds      0.762    −0.094    −0.212    −0.22

Cross-correlations among bonds:
Model      (b^1,b^5)  (b^1,b^10)  (b^5,b^10)  (b^1,b^15)  (b^5,b^15)  (b^10,b^15)
1 Bond      -          -           -           -           -           -
2 Bonds     -         −0.796       -           -           -           -
3 Bonds    −0.944     −0.985      0.931        -           -           -
4 Bonds    −0.458     −0.565      0.918       0.047       −0.877      −0.82

Note: The table shows the correlations between each maturity of outstanding debt and government expenditure (top panel). Moreover, the table also reports the cross-correlations among the bonds (bottom panel).
As shown in Tables 2 and 3, the optimal policy includes an active use of all available maturities. Table 2 shows that the average position in each maturity is significantly different from zero and that bond positions are volatile, suggesting their active use in responding to expenditure shocks. Table 3 shows the correlations of all the maturities with government expenditure and among themselves. First, it shows that the position in the short maturity is positively correlated with expenditure shocks, while the other maturities are negatively correlated. Second, the short maturity is negatively correlated with the other maturities. Together, these facts suggest that, in addition to holding a leveraged portfolio on average, it is optimal to rebalance the portfolio toward shorter maturities in response to expenditure shocks. As shown in Table 4, the hedging benefits of the additional maturities are reflected in higher average leisure and lower consumption volatility, while the economy sustains a lower average consumption. Labor tax volatility and autocorrelation also decrease significantly, while the average tax level rises.
Table 4. Allocations and policies.

Model      E(c_t)   σ(ln(c_t))   E(l_t)   σ(ln(l_t))   E(τ_t)   σ(ln(τ_t))   ρ(ln(τ_t), ln(τ_{t−1}))
1 Bond     0.252    0.029        0.666    0.006        0.247    0.121        0.971
2 Bonds    0.250    0.029        0.668    0.004        0.255    0.106        0.929
3 Bonds    0.248    0.028        0.670    0.005        0.27     0.10         0.914
4 Bonds    0.247    0.027        0.671    0.006        0.274    0.091        0.841

Note: The table shows the effects of the optimal policy on consumption and leisure as the number of bonds increases.
Next, we inspect the economic mechanism through which the hedging benefits provided by the additional maturities affect household allocations and taxes. As has been known since Angeletos (2002), differences in long and short bond prices provide a tool to hedge against shocks by borrowing in long bonds and accumulating assets in the short term. Since long prices are more volatile than short prices, when a negative shock hits, the value of government liabilities falls by more than the value of government assets, thus providing insurance against negative shocks. In addition to decreasing the government's liabilities, the differential response of long and short prices also affects the terms of issuing new debt. Since long prices fall by more than shorter ones, it becomes cheaper for the planner to obtain funds by issuing shorter debt. This is why we observe portfolio rebalancing and a negative correlation between the long and short bonds.
Table 5 shows how optimal debt management affects government finances as we increase the number of debt instruments.
Table 5. Government income and borrowing.

Description                                            1 Bond    2 Bonds   3 Bonds   4 Bonds
Corr. Debt/GDP and g_t                                  0.547     0.136    −0.079    −0.131
Corr. Net Financial Income and g_t                      0.186     0.405     0.416     0.511
Corr. Net Financial Income (constant price) and g_t     0.078     0.11     −0.103     0.019
Av. Net Financial Income (%)                           −0.142    −0.845    −2.213    −2.569
Av. Labor Tax Income (%)                                24.7      25.5      27.0      27.4
Note: The table shows selected moments from the models with one, two, three, and four maturities. The first row shows the correlation between the outstanding debt/GDP ratio and expenditure shocks. Rows two and three show the correlation between government financial income and expenditure shocks. The last two rows show the average net financial income and the average labor tax income. Net Financial Income is defined as the inflow from issuing new debt net of the cost of buying back the outstanding debt. Net Financial Income (constant price) is the counterfactual and corresponds to Net Financial Income holding bond prices fixed at their average values.
To inspect how this rebalancing matters for the government's budget, we decompose government income into labor tax income and net financial income, which is the inflow from issuing new bonds minus the outflow due to outstanding debt. Most importantly, as the number of maturities increases, the correlation between total debt and government expenditures changes sign, as shown in the first row of Table 5.
In the one- and two-bond economies, the government borrows from the private sector to finance expenditure shocks. In the three- and four-bond economies, the government reduces its total debt to subsidize the private sector and smooth its consumption. At the same time, net financial income becomes even more positively correlated with $g_t$ and allows for smoother labor taxes, despite falling total debt in bad times. The reduction of total debt together with rising financial income is achieved precisely because the planner holds leveraged positions and responds to expenditure shocks by substituting toward short bonds.
As further evidence of this mechanism, we construct a counterfactual measure of net financial income assuming that bond prices were fixed at their mean values. The counterfactual correlation is reported in the third row of Table 5. The low correlation here suggests that the comovement between net financial income and government expenditures is achieved by exploiting the differential response of short, medium, and long prices. This indicates that if prices were constant, portfolio rebalancing would have little effect on the cyclicality of financial income and the government's budget.
Looking at the averages in rows four and five, we see that as the number of maturities increases, the government becomes a net payer to the private sector and collects a larger share of its income in labor taxes. This happens because the increase in labor taxes outweighs the decrease in average labor supply. Although average household labor income falls, the household is compensated for holding government debt.
Comparison with alternative methods
There are other simulation-based numerical methods designed to address the issue of multicollinearity among state variables. In this section, we discuss and compare our method to the two most prominent ones: the Condensed PEA (C. PEA) used in Faraglia et al. (2019) and the generalized stochastic simulation algorithm (GSSA) described in Judd, Maliar, and Maliar (2011).
Relation to condensed PEA
This method extracts orthogonal components from the information set. The method is similar to Principal Component Analysis (PCA), except that the number of factors does not have to be chosen ex ante. Algorithm 4 reports the pseudocode of condensed PEA and highlights with colors the parts of condensed PEA that change when the NN-based Expectations Algorithm (NN EA) is implemented. In this section, we solve the model with a short one-period bond and a long ten-period bond using both algorithms. The model is solved with the individual bond bounds M̄ and M̲ set at ±100% of GDP. As reported in Table 6, the two algorithms reach a similar outcome, but the NN-based expectations algorithm is significantly faster. The speed gains mainly come from the removal of the external loop (lines 1, 13, 14, 15, 16, and 17 in condensed PEA), as the neural network digests the information set at once.
Table 6. Moments: condensed PEA versus NN EA.

Method   Time        Forecast Error   E(c_t)   E(l_t)   Short-Debt Share   σ(ln(c_t))   σ(ln(l_t))   σ(ln(τ_t))   ρ(ln(τ_t), ln(τ_{t−1}))
C. PEA   203,810 s   0.373            0.256    0.663    0.219              0.024        0.007        0.116        0.798
NN EA    23,744 s    0.391            0.256    0.664    0.222              0.024        0.007        0.123        0.869
Note: The table shows the equilibrium moments calculated with condensed PEA and the NN Expectations Algorithm (NN EA). The forecast error is calculated as the average distance between each expectation predicted by either condensed PEA or NN EA and the corresponding realized value. Bond bounds are set as M̄ and M̲ at ±100% of GDP. We follow Faraglia et al. (2019) and calculate the short-debt share as the ratio between the market value of short-term debt and the market value of the total outstanding debt. The computation times are obtained using MATLAB 2023a and a computer with an Intel(R) Core(TM) i7-8750H CPU (9 M Cache, up to 4.10 GHz) with 16 GB of RAM.
On the one hand, eliminating the external loop (line 1) reduces the complexity of the algorithm significantly, since condensed PEA requires testing an unknown number of combinations of core regressors. On the other hand, our algorithm requires substituting OLS (lines 10 and 11) with a neural network training algorithm, which has higher complexity. Note that one entire simulation from 1 to T (lines 3–9) takes significantly more time under condensed PEA.23 In total, condensed PEA needed to cycle 4 times before the core set converged. This means that condensed PEA required approximately four times more iterations than NN EA.
Note that, as the number of maturities increases, the condensed PEA requires testing a much higher number of combinations of core regressors. In this sense, NN EA is a more scalable approach. On a related note, in Appendix B.2, we also present a comparison between condensed PEA and our algorithm based on time complexity. Our calculations show that NN EA has a lower time complexity than condensed PEA, if condensed PEA requires more than one loop on the core set of state variables to converge.
Relation to GSSA
Next, we compare our methodology to the GSSA method proposed by Judd, Maliar, and Maliar (2011). GSSA resolves the multicollinearity problem using standard econometric techniques (i.e., singular value decomposition, principal components, and ridge regression), combined with stochastic simulation. We solve the government debt management problem with one maturity using GSSA.24 In particular, we use ridge regression since, as noted in Judd, Maliar, and Maliar (2011), it works best under severe multicollinearity.25 In our application, ridge regression combined with stochastic simulation works when bond constraints are loose, whereas when the constraints are tight, the ridge regression coefficients fail to converge. As we illustrate in detail in Appendix B.3, the reason lies in the fact that ridge regression requires choosing penalty parameters, and this choice presents nontrivial challenges. In our application, the simulated data change in each iteration, since we solve the model with an iterative procedure. In principle, this would require updating the penalty parameters in each iteration. However, in our experience, changing the penalty parameters at each iteration also creates instability, as the simulated data are endogenous to the penalty choices made in previous iterations. For this reason, we fix the penalty parameters (see Appendix B.3 for more details on how we choose them) during the whole procedure and, since the simulated debt sequence tends to change significantly with each iteration when the bond constraints are tight, the algorithm fails to converge. Recall that, as explained in Section 3.3, our algorithm initially restricts the solution artificially within tight bounds on all debt instruments and refines the solution gradually while it opens the bounds slowly. To conclude, the challenges in choosing (or fixing) the penalty parameters, combined with the frequent changes in the debt sequences induced by initially tight bounds, render ridge regression with stochastic simulation hard to scale in our application. Further details can be found in Appendix B.3.
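For completeness, the following minimal sketch shows a ridge fit with a fixed penalty of the kind discussed above, using the toolbox's ridge function on synthetic, nearly collinear data; the penalty value is an arbitrary assumption.

```matlab
% Ridge regression with a fixed penalty on nearly collinear regressors.
rng(8); T = 1000;
x1 = randn(T,1); x2 = x1 + 1e-3*randn(T,1);    % nearly collinear regressors
X = [x1 x2]; y = 2*x1 - x2 + 0.01*randn(T,1);
kpen = 1e-2;                                   % fixed ridge penalty parameter
b = ridge(y, X, kpen, 0);                      % coefficients on the original scale
yhat = [ones(T,1) X]*b;                        % intercept is b(1)
fprintf('cond(X''X) = %.2e, ridge MSE = %.2e\n', cond(X'*X), mean((y-yhat).^2));
```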
Conclusion
In this paper, we exploit the computational gains that derive from the robustness of neural networks to multicollinearity to extend the optimal debt management problem studied by Faraglia et al. (2019) to four maturities. The hedging benefits provided by the additional maturities allow the government to respond to positive expenditure shocks by raising financial income without increasing the total outstanding debt. We show that, through this mechanism, the government uses the additional maturities to effectively subsidize the private sector in recessions, resulting in more leisure and less volatile labor taxes.
The state space includes lagged values of the same variables (e.g., lagged values of outstanding bonds and Lagrange multipliers). Multicollinearity in the state space might prevent standard regression-based algorithms from converging, because the estimated regression coefficients may never stabilize due to high estimation variance and because misspecification of the true policy function under multicollinearity may lead to severe prediction bias, as we show in Section 2.5. Alternatively, researchers have used stochastic simulation based on regularization (see Judd, Maliar, and Maliar (2011)) or have extended the PEA algorithm to condensed PEA; see Faraglia et al. (2019). In Section 5, we discuss how the NN-based expectations algorithm improves upon these methods.
Bhandari, Evans, Golosov, and Sargent (2017b) propose a method that allows one to approximate a system around a current level of government debt, and Lustig, Sleet, and Yeltekin (2008) on the other hand, solve the optimal fiscal policy problem in incomplete markets with seven maturities up to 7 periods using value function iteration on a sparse grid.
Aiyagari et al. (2002), Angeletos (2002), Buera and Nicolini (2004), Lustig, Sleet, and Yeltekin (2008), Faraglia et al. (2019), Bhandari et al. (2017b), and Bigio, Nuño, and Passadore (2023).
For a general introduction to machine learning, the reader can refer to Hastie, Tibshirani, and Friedman (2009). For a course tailored to economists, the reader can refer to the lecture notes by Jesús Fernández-Villaverde available here:
Let $(\lambda_1, \lambda_2, \dots, \lambda_n)$ be the vector of eigenvalues of the matrix $X'X$, such that $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n \geq 0$. Ill-conditioning refers to the fact that the ratio $\lambda_1 / \lambda_n$ is large, implying the matrix is close to being singular.
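This ratio can be computed directly; the construction of X below is a hypothetical example of ours with two nearly collinear columns:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x, x + 1e-4 * rng.standard_normal(n)])

lam = np.linalg.eigvalsh(X.T @ X)     # eigenvalues in ascending order
print(lam[-1] / lam[0])               # largest/smallest: huge => ill-conditioned
```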
Appendix C contains details about the neural network structure used in this section.
Substantial research effort has been put into choosing the initial weights depending on the specific neural network architecture (e.g., see Glorot and Bengio (2010) for deep neural networks).
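For instance, in PyTorch (an implementation choice of ours, not necessarily the one used in the paper), Glorot initialization is available directly:

```python
import torch.nn as nn

layer = nn.Linear(46, 32)              # layer sizes are illustrative only
nn.init.xavier_uniform_(layer.weight)  # the Glorot and Bengio (2010) scheme
nn.init.zeros_(layer.bias)
```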
We did not find a significant difference in allocations and forecast errors when we changed the training set to be 50% and 90% of the sample.
In the context of deep neural networks, the distribution of each layer's inputs varies during the training phase, since the weights of the previous layers change as well. Typically, this requires adopting lower learning rates and carefully choosing the initial parameters. This problem is known as internal covariate shift. In this context, batch normalization allows a higher learning rate by reducing the problem of internal covariate shift (see Ioffe and Szegedy (2015)).
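A minimal sketch of a network with batch normalization, again in PyTorch and with layer sizes that are our own illustrative choices (Appendix C describes the architecture actually used):

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(46, 32),     # e.g., the 46 state variables of the four-maturity model
    nn.BatchNorm1d(32),    # normalizes each unit's pre-activations over the mini-batch
    nn.ReLU(),
    nn.Linear(32, 32),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.Linear(32, 1),      # one approximated expectation term as output
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)  # BN tolerates a larger rate
```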
Downloadable from
We purposely choose a polynomial that cannot correctly approximate the true policy functions, since the true functional form is often ex ante unknown. We check that the results are robust to many types of misspecification. However, the purpose of the example is simply to illustrate the possibility that misspecification under multicollinearity can lead to biased predictions. This problem would not arise with a universal approximator, such as a neural network, because it does not require prespecifying the functional form.
Note that another option would be to specify a rich polynomial structure with many higher-order and cross-terms. One problem with such an approach is that higher-order terms of the same variable are extremely multicollinear.
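The collinearity of powers is easy to verify on a simulated series confined to a narrow range, as equilibrium states typically are; the numbers below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = 1 + 0.1 * rng.standard_normal(5000)   # a series confined to a narrow range
P = np.column_stack([x, x**2, x**3])      # higher-order terms of the same variable
print(np.corrcoef(P.T))                   # pairwise correlations all close to 1
print(np.linalg.cond(P.T @ P))            # very large condition number
```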
The nonparametric nature of a neural network makes it suited to approximating policy functions with strong nonlinearities, which would be harder to capture with polynomials.
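As a toy comparison, our own construction and unrelated to the model's actual policy functions, a small network can fit a kinked function of the sort produced by occasionally binding constraints, whereas a global cubic smooths the kink away:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, (2000, 1))
y = np.maximum(x[:, 0], 0.0)            # kinked "policy function"

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(x, y)
cubic = np.polyfit(x[:, 0], y, deg=3)

print(np.abs(mlp.predict(x) - y).max())              # max absolute error, NN fit
print(np.abs(np.polyval(cubic, x[:, 0]) - y).max())  # max absolute error, cubic fit
```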
In principle, households are able to trade government securities in the secondary market. However, since we assume households are identical, there is no trade in equilibrium and, for ease of notation, we omit these trades from the household's budget constraint.
See Aiyagari et al. (2002) for details on how to use the recursive Lagrangian approach in this context.
This is the government saving constraint, which is equivalent to a household's borrowing constraint.
We use S and N to denote the short- and long-bond maturities, respectively. Six expectation terms must be approximated; in the latter case, one term reduces to a known expression and does not require approximation.
The network can be initially trained using educated guesses for the series being approximated. It is important that the initial training sequence is not constant. More details can be found in Appendix B.1.
We also find that including ξ terms explicitly in the training set improves prediction accuracy. More details can be found in Appendix B.1.
There is no need to check this term, which can be backed out analytically from the associated first-order condition.
We obtain very similar estimates using the sum of government consumption and gross investment from the NIPA tables.
Lines 3–9 of Algorithm 4 take on average 34 s for the condensed PEA, whereas the neural network training phase takes around 15 s. Lines 13–16 of Algorithm 4 take on average 0.003 s for the condensed PEA, whereas the neural network training phase takes around 0.21 s.
We also attempted to solve the model with two maturities but we were not successful.
Our application features a very ill-conditioned matrix $X'X$, especially when the number of maturities increases.
The replication package for this paper is available at
1 School of Economics, University of Surrey
2 Economic Research Department, Federal Reserve Bank of Chicago