Would anyone else appreciate Pralana adding correlations across asset classes and auto-correlations across time for the key market forecast factors (inflation and returns for domestic stocks, international stocks, bonds, etc.)?
Lately I've been struggling with the shortcomings of all consumer retirement planning SW in that they fail to account for these correlations in their forecasts, which makes it very hard to trust the Monte Carlo analyses.
Creating a Monte Carlo simulation that accounts for both cross-correlation (how inflation, stocks, and bonds move together) and auto-correlation (how this year’s inflation affects next year’s inflation) would involve Cholesky Decomposition for the cross-correlations and Vector Autoregression (VAR) for the auto-correlations.
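For anyone curious what the cross-correlation half of that looks like in practice, here is a minimal Python/NumPy sketch (not anything Pralana actually does, and the means, SDs, and correlation values are purely illustrative): independent normal draws are converted into cross-correlated draws via the Cholesky factor of the covariance matrix. A full VAR model would add lagged terms on top of this to handle the auto-correlations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual means and SDs for [inflation, stocks, bonds]
means = np.array([0.025, 0.07, 0.04])
stds = np.array([0.02, 0.16, 0.06])

# Hypothetical cross-correlation matrix (illustrative values only)
corr = np.array([
    [1.00, -0.20, -0.30],
    [-0.20, 1.00, 0.15],
    [-0.30, 0.15, 1.00],
])

cov = np.outer(stds, stds) * corr  # covariance from correlations + SDs
L = np.linalg.cholesky(cov)        # lower-triangular Cholesky factor

# One 30-year trial: independent standard normals -> correlated draws
z = rng.standard_normal((30, 3))
draws = means + z @ L.T            # each row: correlated [infl, stk, bnd]
```

Generating 1,000 such trials is just a loop (or a third array axis); the hard part, as noted, is that there's no way to feed them into Pralana.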
We can do the market forecasting in Excel, so I assume it could be done in Pralana, but there's no way to drop 1,000 sample market forecasts into Pralana and have it analyze how the plan would perform across those potential futures.
Data on the correlations between the key market factors is published annually by reputable financial management firms (example attached), and the major auto-correlations are documented in numerous academic studies for inflation, the S&P 500, and probably others.
As you might guess by my comment history, I too would be open to a more sophisticated version of parametric monte carlo using covariances. Even a simple correlation matrix with stocks-bonds (e.g. 0.15) would be an improvement.
But I would add a caveat: only if the current monte carlo is grossly wrong and leads to misleading probability of success. Is it?
You sure went from zero to 100, though, with Cholesky Decomposition for the cross-correlations and Vector Autoregression (VAR). I'm not sure all that would be needed or practical for a consumer product. (Exception: Maxifi uses those.)
I personally would favor a bootstrap monte carlo over a complex parametric one like you describe, in part because its results reflect those covariances and autocorrelations without needing complicated Python modules or the math behind them. It achieves the same ends with a much simpler implementation.
A related thought I've been wondering about: why is the (bright red) deterministic wealth line on the monte carlo always below the median of the Monte Carlo bands? In theory the median terminal monte carlo wealth should be almost identical to the deterministic terminal wealth, given they use the very same asset CAGRs. Yet it seems to end lower by varying amounts even if you have "assets correlated" checked.

My hunch is that Pralana is ignoring volatility in the deterministic line and simply taking a weighted average of the component asset (CAGR) returns, rather than converting those asset CAGRs to arithmetic averages first, averaging those, and then converting back to a portfolio CAGR. (In other words, ignoring the rebalancing bonus.) The terminal wealth of the dark red line and the middle of the bands could be made identical by using a portfolio CAGR based on the portfolio SD--that is, working the volatility into the deterministic portfolio's growth rate (which would result in a slightly higher growth rate than the arithmetic weighted average of the asset CAGRs). But would we want that? Is anyone else bothered by the red line being lower?
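To illustrate the hunch above with a toy simulation (hypothetical numbers, and emphatically not Pralana's actual code): a rebalanced two-asset portfolio's median CAGR comes out higher than the weighted average of the individual asset CAGRs, because diversification lowers the portfolio's volatility drag. If Pralana's deterministic line uses the weighted-average-of-CAGRs shortcut, it would sit below the Monte Carlo median for exactly this reason.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical assets, e.g. stocks and bonds (arithmetic mean, SD)
mu = np.array([0.07, 0.04])
sd = np.array([0.16, 0.06])
corr = 0.15
w = np.array([0.6, 0.4])        # annually rebalanced 60/40 portfolio
years, trials = 50, 20000

cov = np.outer(sd, sd) * np.array([[1.0, corr], [corr, 1.0]])
returns = rng.multivariate_normal(mu, cov, size=(trials, years))

# Median CAGR of each asset alone, then their weighted average
asset_cagr = np.median((1 + returns).prod(axis=1) ** (1 / years) - 1, axis=0)
weighted_avg_cagr = w @ asset_cagr

# Median CAGR of the rebalanced portfolio (weights reset every year)
port_returns = returns @ w
port_cagr = np.median((1 + port_returns).prod(axis=1) ** (1 / years) - 1)

print(weighted_avg_cagr, port_cagr)  # portfolio CAGR comes out higher
```

The gap between the two numbers (a few tenths of a percent per year here) compounds into a visible divergence over a multi-decade plan.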
Merry Christmas, @jkandell !
I would love to have a bootstrapped Monte Carlo IF we were able to apply +/- biases to the historical data. (e.g., If I expect stock market returns in the future to be lower than the past, then I could apply a downward bias on all bootstrap samples. Or I could bias inflation up or down based on my hypotheses.)
Regarding whether the correlations actually matter: toggling the "Use Correlated RoRs" switch in Monte Carlo mode swings the Chance of Success for a representative Actuarial plan of mine from 73% (Correlated) to 77% (Uncorrelated). I could be okay with just assuming that reality will be somewhere in that range (since my assets are all positively correlated), but I suspect that auto-correlations (mean reversion for stock returns and persistence for inflation) would have a larger impact, especially from extended periods of high inflation.
I share your concern regarding the divergence between the deterministic Total Savings path and the median case of the Monte Carlo analysis paths. FWIW, I don't see the same divergence in the Consumption Smoothing withdrawal strategy (where deterministic tracks median Monte Carlo Path), and the deterministic path of the Consumption Smoothing path also tracks the median Monte Carlo path for the Actuarial withdrawal strategy.
I discussed my concern that the Actuarial deterministic path doesn't track the median Monte Carlo path with Stuart and Charlie about a year ago, and my impression is that Charlie doesn't agree that it should.
Personally I see three pain points in the current implementation of the Actuarial withdrawal strategy:
1) The Pralana implementation allows withdrawal percentages in the final years to explode, reaching ~50% in the year before the plan ends and ~100% in the final year. This naturally causes the Chance of Success to trend towards 50%, since there's a ~50% chance that market returns in those final years will be lower than expected. To fix this problem, the Bogleheads community caps annual withdrawals at 10% in their implementation of the actuarial withdrawal strategy (VPW - Variable Percentage Withdrawal), and that's what I'd like to see Pralana do as well.
2) In Monte Carlo mode, the Pralana implementation of the Actuarial withdrawal strategy calculates each year's allowed withdrawals by assuming that the long-term average future market returns will be equal to the prior-year's actual market returns. The large swings in prior-year returns, easily +25% or -25% in Monte Carlo simulations, can result in unreasonably large withdrawals in the following year, which depletes the portfolio much too quickly. My suggestion was to have Pralana calculate what the allowed withdrawals would be each year by assuming that future average returns would be equal to the user-provided estimates of future market returns, but I don't think Charlie liked that because that would give the simulated user in each Monte Carlo trial perfect future knowledge of what the actual future distributions of market returns could be, which of course isn't true in the real world.
3) In Monte Carlo mode, when determining what spending level is allowable each year, the Pralana implementation of the Actuarial withdrawal strategy uses prior-year taxes as the estimate for current-year taxes, with guardrails limiting changes to the prior year's variable spending to at most a factor of 2x in either direction. This guesstimate of current-year taxes can experience big swings in the early years of the plan as we roll off employment and ramp Roth conversions up or down. The resulting over-spending in some early years of the plan can deplete the portfolio too quickly, which gives a downward bias to the median Monte Carlo path of total savings after a few years. As a result, the Actuarial withdrawal strategy spends the rest of the plan "catching up" to the deterministic plan as it responds to the lower total savings in later years by spending less.
I would love to have a bootstrapped Monte Carlo IF we were able to apply +/- biases to the historical data. (e.g., If I expect stock market returns in the future to be lower than the past, then I could apply a downward bias on all bootstrap samples. Or I could bias inflation up or down based on my hypotheses.)
We're two birds of the same feather. It turns out the math to do the first part is quite simple. If you want to end up with a bootstrap of average return R and standard deviation S, you'd simply scale the bootstrap samples/draws:

Bootstrap_sample_scaled = R + (Bootstrap_sample_original - r) * S / s,
where R is your target mean return, r is your original population mean return, S is your target standard deviation and s is the standard deviation of the original population. This effectively centers your sample at 0, shrinks or expands it enough to generate standard deviation S, then shifts the entire distribution so its new mean is R.
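As a sketch of what that scaling looks like in code (the historical returns and the targets R and S are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical annual returns (the "original population")
history = np.array([0.12, -0.05, 0.21, 0.08, -0.18,
                    0.30, 0.10, 0.04, 0.15, -0.02])

r, s = history.mean(), history.std()  # original mean and SD
R, S = 0.05, 0.14                     # user's target mean and SD

# Plain bootstrap: draw with replacement, then center, rescale, shift
sample = rng.choice(history, size=10000, replace=True)
scaled = R + (sample - r) * S / s
```

By construction the scaled history has exactly mean R and SD S; the bootstrap draws match them up to sampling noise.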
You would need to use block samples to capture the autocorrelations and covariances; somewhere between 5- and 10-year blocks might be ideal.
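A minimal block-bootstrap sketch (again with placeholder data, and not anything Pralana has committed to): contiguous multi-year blocks are drawn with replacement and concatenated, so the year-to-year autocorrelation within each block of the historical record is preserved. For multiple asset classes you would draw the same block start years for every series, which also preserves the cross-asset covariances.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a real annual-return history (e.g. ~74 years of data)
history = rng.normal(0.07, 0.16, size=74)

block_len, horizon = 5, 30  # 5-year blocks, 30-year plan

def block_bootstrap(series, block_len, horizon, rng):
    """Draw contiguous blocks (preserving within-block autocorrelation)
    and concatenate them until the horizon is filled."""
    n_blocks = -(-horizon // block_len)  # ceiling division
    starts = rng.integers(0, len(series) - block_len + 1, size=n_blocks)
    path = np.concatenate([series[s:s + block_len] for s in starts])
    return path[:horizon]

path = block_bootstrap(history, block_len, horizon, rng)  # one MC trial
```

Repeating this 1,000 times gives the set of sample futures a bootstrap Monte Carlo would evaluate the plan against.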
I think a non-scaled bootstrap would also be interesting, to generalize from history. But from our communications, I don't think either of these is in line with Pralana's overall vision.
Thank you @jkandell
I agree that scaling the bootstrap samples to achieve our target means and standard deviations would be better than simply shifting them.
Ah well, we can dream at least!
Projection Lab recently implemented block bootstrap historical simulations, and the user gets to select the sample periods with a default of 5 years.
On behalf of, I believe, the majority of Pralana users, I too have no idea what @boston-spam-02101gmail-com and @jkandell are talking about 😜.
@boston-spam-02101gmail-com Cool. Can the user scale it though (user defined return and sd)? The only scaled bootstrap I know of is TPAW, though it is a bootstrap of actuarial consumption not total wealth.
@jkandell No, Projection Lab does not currently support either scaling or shifting, just straight samples from the historical record.
On behalf of, I believe, the majority of Pralana users, I too have no idea what @boston-spam-02101gmail-com and @jkandell are talking about 😜.
The summary of this thread thus far (without math) is this:
1) Kevin asked in his original post if anyone would want a more sophisticated monte carlo (that takes into account asset correlations, regression to mean, etc). He listed a bunch of fancy math methods that do that. I responded: only if it makes a real difference in what we have now. (I'm not convinced it does. I'm a big believer in 'good enough is good enough'.)
2) Then we nerded out discussing two methods of doing monte carlo: bootstrapping (predicting via randomly re-arranging real-life historical samples) vs. parametric monte carlo (predicting via random generation from mathematical probability distributions). But it's moot because Charlie and Stuart don't feel bootstrapping would be useful to the average Pralana user.
3) Then the discussion moved to whether it was a problem the "red line" (summary of the review>tabular>balance sheet) in the monte carlo graph appeared to often be below the middle of the monte carlo bands. Kevin and I both thought it might indicate a problem (since in theory the deterministic should be the middle of the bands), but he noted it doesn't appear to happen in all the withdrawal methods.
4) Kevin offered issues with Pralana's version of the actuarial method.
I was just having a little fun 😘. But it does bring up a question, in my mind anyway: what do the majority of Pralana users want out of the Pralana Retirement Calculator (PRC)? A long-term retirement calculator without going into the weeds, a financial calculator, year-to-year in-depth analysis, some of each? Several people, including myself, tend to nerd out on this forum. I just hope that the level of nerditude does not turn some people away; it is a fantastic tool no matter how you use it.
I share your concern regarding the divergence between the deterministic Total Savings path and the median case of the Monte Carlo analysis paths. FWIW, I don't see the same divergence in the Consumption Smoothing withdrawal strategy (where deterministic tracks median Monte Carlo Path).
I was referring to the "specified expenses only" red line above. I hadn't thought to check other withdrawal methods. On a quick test of a consumption-smoothed monte carlo, for me the end point of the red deterministic line is considerably higher than the median monte carlo end point. I have no idea why we're getting different results. I'd never done a consumption-smoothed monte carlo before; I'd always switched back to specified expenses, plugging in my consumption-smoothed result.
Personally I see three pain points in the current implementation of the Actuarial withdrawal strategy...
Kevin, I'm going to reply to the rest of this post by creating a different thread about pralana's actuarial method.
I was just having a little fun 😘. But it does bring up a question, in my mind anyway: what do the majority of Pralana users want out of the Pralana Retirement Calculator (PRC)? A long-term retirement calculator without going into the weeds, a financial calculator, year-to-year in-depth analysis, some of each? Several people, including myself, tend to nerd out on this forum. I just hope that the level of nerditude does not turn some people away; it is a fantastic tool no matter how you use it.
I think you know I agree with you. That's why I argued that Pralana's existing Monte Carlo is "good enough". But that's in part because, knowing its limits, I don't take its "probability of success" very seriously. (I worry the average user does.) The simplest advice is: the 25th-75th bands (the darker ones) are roughly how things might go given your return estimates--which is probably all the average user needs.
I do think it's interesting that Boldin and Projection Lab and MaxiFi all incorporated some of the things Kevin and I discussed. They must feel their users needed and wanted that sophistication. But then Boldin has had a backlash; so who knows.
Personally, I'm in favor of adding as much functionality as possible (Ex: correlations and regression). However, I'm not sure how much sense this makes for consumer level software. Based on what I see on Reddit forums, I think a lot of people don't even know there is a volatility assumption or what it is. I would be curious to know how many Pralana users code their own volatility assumptions.
My wish would be for more flexibility around the scenario results. First, allow the user to specify the number of scenarios. When I'm just messing around, I don't want to wait for 1000 when 50 or 100 would be "good enough". Second, make details from the scenario runs available. Maybe keep the random numbers, rank the scenario results by something (net worth?) and allow reports to be generated for a specified scenario. Looking at the bad scenarios can be very helpful.