I've noticed that the total savings analysis using Monte Carlo puts the fixed-rate projection squarely in the middle of the range of outcomes. But when I switch to historical results, using 1965 as a start, the fixed-rate outcome falls to the bottom decile within 40 years.
I've long had a feeling that humans are good problem solvers: the larger Earth's population gets, the larger the absolute number of geniuses that exist, and by extension, the better we get (as a species) at solving our biggest problems.
I'm the first to admit my confirmation bias. Will someone smarter than me look at the attached PNG demonstrating the phenomenon and tell me what I'm missing?
@retiremesweetly The randomized returns used in the Monte Carlo analysis are based on the mean rates of return and standard deviations that you specify for the assets in your portfolio, so the deterministic projection tends to land at roughly the 50th percentile of the Monte Carlo results. In contrast, historical analysis totally disregards the mean rates of return you've specified and instead uses historical rates of return. This can produce historical analysis results that are substantially better or worse than the deterministic projection.

Incidentally, your picture does not show historical analysis results using a starting year of 1965; it shows the results of the historical analysis that uses all historical starting years in the database and then produces an aggregate result based on percentiles. You have to enable the historical sequence analysis to see the results produced from a specific starting year, and when you do, that result will be shown as a yellow line. A starting year of 1965 is a particularly bad starting year and typically produces worst-case long-term results. Does this make sense?
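To illustrate the distinction, here's a minimal sketch of the two approaches. All names, balances, and return figures are hypothetical stand-ins, not the tool's actual data or algorithm: Monte Carlo draws each year's return from the normal distribution you specify, so outcomes spread symmetrically around the deterministic projection, while a historical sequence applies one fixed chain of past returns, which can sit far from the middle.

```python
import random

def deterministic_balance(start, mean_return, years):
    """Fixed-rate projection: compound the specified mean return every year."""
    balance = start
    for _ in range(years):
        balance *= 1 + mean_return
    return balance

def monte_carlo_balances(start, mean_return, std_dev, years, trials, seed=42):
    """Monte Carlo: each trial draws yearly returns from a normal distribution
    built from the mean/std you specify, so the deterministic projection lands
    somewhere in the middle of the sorted outcomes."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    outcomes = []
    for _ in range(trials):
        balance = start
        for _ in range(years):
            balance *= 1 + rng.gauss(mean_return, std_dev)
        outcomes.append(balance)
    return sorted(outcomes)

def historical_sequence_balance(start, yearly_returns):
    """Historical sequence: apply actual past returns in order from a chosen
    starting year, ignoring the specified mean entirely."""
    balance = start
    for r in yearly_returns:
        balance *= 1 + r
    return balance

# Hypothetical inputs: $100k, 6% mean, 12% std dev, 30 years.
fixed = deterministic_balance(100_000, 0.06, 30)
mc = monte_carlo_balances(100_000, 0.06, 0.12, 30, trials=1_000)

# A sequence with poor early returns (sequence-of-returns risk) can finish
# well below the fixed-rate projection even if later years recover.
bad_start = historical_sequence_balance(
    100_000, [-0.10, -0.08, 0.02] + [0.07] * 27)
```

The point of the sketch: `fixed` falls inside the spread of `mc` outcomes by construction, whereas `bad_start` is a single path whose position relative to `fixed` depends entirely on which starting year's return sequence you feed it.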
Stuart