In the first post, we looked at what makes a good model. A good model will point to the issues and risks associated with a planned project and allow management to mitigate risk or otherwise improve the outcome by altering the size, scope, or method of the proposed operations. Bias can sometimes cause us to ignore, or fail to see, conflicting issues. Forecasts are prepared to evaluate and support a decision, not to justify a decision that has already been made.
I see essentially three types of bias. First, learned bias, something we all have. This bias develops from our work experience, our mentors, and a host of other factors. The broader the exposure to issues and organizations, the more this bias is diminished, though it is never eliminated. Second, institutional bias, or what I call the “build it and they will come” approach: we see things only from the inside of the institution. In the 1970s, the U.S. automakers suffered from this kind of institutional bias. Finally, ego and alter ego bias work to make things appear better or worse than they are. These are equally dangerous, one leading to overreach and the other to missed opportunity. In each of these three types, we consciously or subconsciously adopt assumptions.
A forecast can be corrupted by rationalized assumptions that arise out of bias, or out of making it look the way we want it to look. We believe it will work; therefore, we forecast it to work. Proponents or opponents will push for more aggressive or more conservative assumptions in the forecast. In many organizations, the modeler is likely not the final decision maker but rather someone good with Excel or other tools, and may or may not have the depth to understand and evaluate the inputs and assumptions.
The challenge is to evaluate assumptions constantly and objectively. I once had a partner who, because of his role in the firm, saw a lot of businesses looking for investment in the form of venture capital or other funding. He became known as the resident cynic. He correctly reasoned that, as good as a forecast looked, the business was putting the best spin on the opportunity and would not grow, capture market share, or control expense growth as forecasted. There should be someone in the decision-making process who serves as the resident cynic, guarding against bias. Sure, there are good reasons to put forth the best-case scenario, but don’t be trapped into thinking that it is also the most likely case.
I want to summarize several experiences I have had with models. In each case, there was bias or there were mistakes of various forms. The facts have been simplified, and you may notice an exaggeration or two to make my point.
Case Study #1 – Don’t work back from the answer you want –
I did several projects for a hospital group preparing certificate of need (CON) applications. In each case, we challenged and reconciled the internal models. The client opened its staff and records to us so that we could build a more detailed, defensible model for the CON applications. Then the core management team set out to challenge and test our assumptions on payment, staffing, and other factors. It was not until we were engaged in a contested CON that we fully appreciated the importance of challenges and pushback from the full team on the correctness of the assumptions used in a very public model.
The competing application was slightly smaller, in the same market, and had a similar payor mix and case mix. Using the opposition’s assumptions on volume and payor mix, we recreated the opposition’s model with our client’s assumptions on payment rates and expenses. The expected revenue, required staffing, and variable expenses were all estimable on the same basis.
What we found was that the opposition had overstated the rate of payment for Medicare. They had also included fewer hours of care, particularly during the start-up phase. Together, the two issues materially overstated the feasibility of the opposition’s smaller application. Whether the opposing party’s model reflected bias or mistakes, our client won at the administrative level and ultimately prevailed in all of the appeals.
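To see how two modest-looking assumption changes compound, here is a minimal sketch with invented numbers; the actual figures from the applications are not public, and the rates, hours, and costs below are assumptions chosen purely for illustration.

```python
# Invented numbers showing how an overstated payment rate and understated
# hours of care compound to overstate margin. All values are hypothetical.

volume            = 10_000   # patient days, held the same in both models
client_rate       = 900      # assumed realistic payment per patient day
opposition_rate   = 950      # assumed overstated payment per patient day
client_hppd       = 3.5      # assumed realistic hours of care per patient day
opposition_hppd   = 3.0      # assumed understated hours of care
hourly_labor_cost = 45       # assumed loaded labor cost per hour

def margin(rate: float, hppd: float) -> float:
    """Contribution after variable labor, on a common volume basis."""
    return volume * (rate - hppd * hourly_labor_cost)

print(f"Client basis:     ${margin(client_rate, client_hppd):,.0f}")      # $7,425,000
print(f"Opposition basis: ${margin(opposition_rate, opposition_hppd):,.0f}")  # $8,150,000
# Two individually small assumption shifts overstate margin by roughly 10%.
```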
Case Study #2 – Know your market –
In a dispute over the value of a joint venture, the buying partner presented a forecast with limited or no growth, claiming that the county in which the subject of the valuation was located had static or no growth. While true for the county as a whole, this ignored the fact that the subject was located in the fastest-growing, most affluent, and most economically viable zip code, with a clear set of travel patterns that attracted patients away from the downtown medical hubs. Further, the subject was contiguous with, and closely aligned to, the two counties in the MSA with the highest growth rates and employment percentages. Ultimately, the parties reached a compromise value by negotiating a revised growth rate.
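The growth assumption matters because even a small change moves the value materially. Here is a hedged sketch using a simple perpetuity-growth model with invented cash flow and discount rate; the post does not disclose the actual valuation method or figures.

```python
# Hypothetical illustration of how the growth assumption drives value,
# using the standard perpetuity-growth formula:
#     value = next_year_cash_flow / (discount_rate - growth_rate)

def perpetuity_value(cash_flow: float, discount_rate: float, growth_rate: float) -> float:
    """Value of a cash-flow stream growing at a constant rate forever."""
    assert discount_rate > growth_rate, "model requires r > g"
    return cash_flow / (discount_rate - growth_rate)

cash_flow = 1_000_000   # assumed next-year cash flow
r = 0.12                # assumed discount rate

for g in (0.00, 0.02, 0.04):
    print(f"g = {g:.0%}: value = ${perpetuity_value(cash_flow, r, g):,.0f}")
# g = 0%: value = $8,333,333
# g = 2%: value = $10,000,000
# g = 4%: value = $12,500,000
```

Moving the growth rate from zero to 4% raises the value by half, which is why the parties could settle the dispute by negotiating that single assumption.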
Case Study #3 – You are going to be paid how much?
A model based on historical revenue produced a high proposed sales price. The owners assumed that the current, higher revenue would last forever. However, they did not consider a significant reduction in Medicare payment that would take effect the next year. A lower payment, with no change in volume or expenses, meant that the entire reduction fell straight to net income. It was never clear whether there was bias at work or whether the owner and the preparer simply did not understand the pending reimbursement changes.
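The arithmetic is worth spelling out, because a modest revenue cut becomes a large income cut when costs stay fixed. This is a minimal sketch with invented numbers; the case's actual revenue, expenses, and rate reduction are not given in the post.

```python
# Hypothetical figures showing why a payment cut with no change in volume
# or expenses falls entirely to net income.

revenue  = 5_000_000   # assumed annual revenue at current Medicare rates
expenses = 4_000_000   # assumed annual expenses (unaffected by the rate cut)
rate_cut = 0.10        # assumed 10% Medicare payment reduction

net_income_before = revenue - expenses
net_income_after  = revenue * (1 - rate_cut) - expenses

print(f"Net income before cut: ${net_income_before:,.0f}")  # $1,000,000
print(f"Net income after cut:  ${net_income_after:,.0f}")   # $500,000
# A 10% revenue cut halves net income here: every dollar of lost payment
# is a dollar of lost profit when costs don't move.
```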
Case Study #4 – Don’t be wed to historical models –
Over the years, I have assisted clients working with the Department of Justice on ability-to-pay settlements. The DOJ’s submission request always asks not only for the current projections but also for recent forecasts or projections. That meant ensuring that the current model presented the most likely outcome given the current environment. We also explained each of the key assumptions to the DOJ. Included in that explanation was a reconciliation, or at least an identification, of differences from the historical models, including challenges to previous assumptions or bias.
Case Study #5 – Competing bias – Dig for the right facts and assumptions to build consensus.
There were opposing schools of thought within a tertiary facility that was considering a helicopter service. At the time, there was only limited competition and a clear need. They understood the basics: helicopters are expensive and not well paid, so to be financially feasible the helicopter would have to reach certain volumes of new paying patients who were otherwise going to other markets. One group was skeptical about how often the helicopter would be needed and to what extent it would bring in new patients rather than simply transporting existing ones. The other group is best described as overzealous, talking up the “ripple” effect on other referrals. So how to evaluate? The pro and con groups had to work together, challenging each other every step of the way. In doing so, they focused on the major assumptions by:
- Establishing the diagnoses and procedures that qualify for helicopter transport and applying them to the existing patient population to set a baseline of existing patients.
- Setting, based on county of origin and referral history, a range of potential transports, considering time, distance, and weather.
- Treating the cost of the helicopter itself as largely fixed: 24/7 staffing, communications, ground support, maintenance, and the capital cost of the helicopter and the hangar.
- Challenging estimates of payment and expense for the small group of high-cost patients; a lower contribution margin was used to adjust for the risk. (A simplified breakeven sketch follows this list.)
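Those assumptions reduce to a classic fixed-cost breakeven question: how many new paying patients does the program need? Here is a minimal sketch with invented numbers; the facility’s actual costs and margins are not given in the post.

```python
# A simplified breakeven calculation. Fixed costs (24/7 staffing,
# communications, ground support, maintenance, capital recovery on the
# helicopter and hangar) are spread over new patients, each contributing
# a risk-adjusted margin. All figures are hypothetical.

annual_fixed_cost   = 3_000_000   # assumed annual fixed cost of the program
contribution_margin = 12_000      # assumed margin per NEW paying patient,
                                  # already reduced for high-cost-patient risk

breakeven_new_patients = annual_fixed_cost / contribution_margin
print(f"New patients needed to break even: {breakeven_new_patients:,.0f}")  # 250
# Transports of existing patients do not count toward this number; the
# program must attract patients who would otherwise go to other markets.
```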
For more information –
Ken Conner
Conner Healthcare Group