Bias – the Enemy of a Good Forecast

In the first post, we looked at what makes a good model. A good model will point to issues and risks associated with a planned project and allow management to mitigate risk or otherwise improve the outcome by altering the size, scope, or method of proposed operations. Bias can sometimes cause us to ignore or fail to see conflicting issues. Forecasts are prepared to evaluate and support a decision, not to justify a decision already made.

I see essentially three types of bias. First, learned bias, something that we all have. This bias develops from our work experience, mentors, and a host of other factors. The broader our exposure to a range of issues and organizations, the more the likelihood of bias is diminished, though never eliminated. Second, institutional bias, or what I call the “build it and they will come” approach, where we see things only from the inside of the institution. In the 1970s, the U.S. automakers suffered from this kind of institutional bias. Finally, ego and alter ego bias work to make things appear better or worse than they are. These are equally dangerous, one leading to overreach and the other to missed opportunity. In each of these three types, we subconsciously or consciously adopt assumptions.

A forecast can be corrupted by rationalized assumptions that arise out of bias, or out of making it look the way we want it to look. We believe it will work; therefore, we forecast it to work. Proponents or opponents will push for more aggressive or more conservative assumptions in the forecast. In many organizations, the modeler is likely not the final decision maker but rather someone good with Excel or other tools. The modeler may or may not have the depth to understand and evaluate the inputs and assumptions.

The challenge is to evaluate assumptions constantly and objectively. I once had a partner who, because of his role in the firm, saw a lot of businesses looking for investment in the form of venture capital or other funding. He became known as the resident cynic. He correctly reasoned that, as good as the forecast looked, the business was putting the best spin on the opportunity and would not grow, capture market share, or control expense growth as forecasted. There should be someone in the decision-making process who serves as the resident cynic, guarding against bias. Sure, there are good reasons to put forth the best-case scenario, but don’t be trapped into thinking that it is also the most likely case.

I want to summarize several experiences I have had with models. In each case, there was bias or there were mistakes of various forms. The facts have been simplified, and you may notice an exaggeration or two to make my point.

Case Study #1 – Don’t work back from the answer you want –

I did several projects for a hospital group in preparing CONs. In each case, we challenged and reconciled the internal models. The client opened their staff and records to us to build a defensible model for CON applications in greater detail. Then, the core management team set out to challenge or test our assumptions on payment, staffing, and other inputs. It was not until we were engaged in a contested CON that we fully appreciated the importance of challenges and pushback from the full team on the correctness of the assumptions used in a very public model.

The competing application was slightly smaller in size, in the same market, and had a similar payor mix and case mix. Following the opposition’s assumptions on volume and payor mix, we recreated an opposition model consistent with our client’s assumptions on payment rates and expenses. The expected revenue, required staffing, and variable expenses were estimable on the same basis.

What we found was that the opposition had overstated the rate of payment for Medicare. They had also included fewer hours of care, particularly during the start-up phase. The two issues materially overstated the feasibility of the opposition’s smaller application. Whether the opposing party’s model reflected bias or mistakes, our client won at the administrative level and ultimately prevailed in all the appeals.

Case Study #2 – Know your market –

In a dispute over the value of a joint venture, the buying partner presented a forecast with limited or no growth, claiming that the county in which the subject of the valuation was located had static or no growth. While true for the county, it did not reflect the fact that the subject of the valuation was in the fastest growing, most affluent, and most economically viable zip code, with a clear set of travel patterns that attracted patients away from the downtown medical hubs. Further, the subject was contiguous and closely aligned with the two counties in the MSA with the highest growth rates and the highest employment percentages. Ultimately, the parties found a compromise value through negotiation of a revised growth rate.

Case Study #3 – You are going to be paid how much?

A model originally based on historical revenue supported a high proposed sales price. The owners assumed that the current higher revenue would last forever. However, they did not consider a significant reduction in payment by Medicare that would take effect in the next year. A lower payment without changes in volume or expenses meant that the entire reduction lowered net income. It was never clear whether there was bias, or whether the owner and preparer did not understand pending reimbursement changes.

Case Study #4 – Don’t be wed to historical models –

Over the years, I have assisted in ability-to-pay settlements on behalf of clients working with the Department of Justice. Part of the submission request from DOJ is always for not only the current projections but also recent forecasts or projections. That meant ensuring that the current model presented the most likely outcome given the current environment. We also explained each of the key assumptions to the DOJ. Included in that explanation was a reconciliation or identification of differences from the historical models, including challenges to previous assumptions or bias.

Case Study #5 – Competing bias – Dig for the right facts and assumptions to build consensus.

There were opposing schools of thought within a tertiary facility that was considering a helicopter service. At the time, there was only limited competition and a clear need. They understood the basics – helicopters were expensive and not well paid, so to be financially feasible the helicopter would have to reach certain volumes of new paying patients who were going to other markets. One group was skeptical: how often would the helicopter be needed, and to what extent? Would it result in new patients, or simply transport existing patients? The other group is best described as overzealous, discussing the ‘ripple’ effect on other referrals. So how to evaluate? The pro and con groups had to work together, challenging each other every step of the way. In doing so, they focused on the major assumptions by:

  • Establishing the diagnoses and procedures that qualify for helicopter transport and applying them to the existing patient population to set a baseline of existing patients.
  • Setting, based on county of origin and history of referral, a range of potential transports considering time, distance and weather.
  • Measuring the cost of the helicopter itself as largely fixed – 24/7 staffing, communications, ground support, maintenance, capital cost of the helicopter and the hangar.
  • Challenging estimates of payment and expense of a small group of high-cost patients. A lower contribution margin was used to adjust for the risk.

For more information –

Ken Conner

Conner Healthcare Group

Ken.conner@connerhealthcare.com

www.Connerhealthcare.com

Healthcare industry – How Good are your Financial Models?

Financial modeling in the healthcare industry is critical in evaluating capital allocation, ever-changing payments, and other factors. We use models in contract negotiations, valuations, budgeting, capital spending analysis, Certificate of Need, financing, decisions to enter or expand a market, merger/acquisition/divestiture, and a host of others. Getting to the right answer is critical to making the right resource allocation. And, if we have a good objective model, we are likely to better understand the risks.

Assumptions and data inputs are constantly changing. So, let’s acknowledge that on the day after any given forecast model is complete, it will become outdated. The model must be able to look at the upside and downside of any project. As a practical matter, an entity must pick a point in time, use the most current data, make the best assumptions, evaluate the sensitivity, and settle on a model that will require periodic updates. Once the model is ‘final’, there should be two versions: a locked-down model on which decisions were based, and a second version to be used as the starting point for the next generation of models for ongoing analysis.

The size and scope of the subject analysis will determine the complexity and flexibility required. Certain models require an extra layer of scrutiny, such as:

  • External models including CONs, valuations, financing, litigation, and settlements; or
  • Internal models including transaction evaluation and strategic plans.

So, what makes a good model?

  • Attention to detail – without being burdened by immaterial issues. The complexity of a model can provide a greater degree of comfort and greater accuracy as it is updated. It will also make changes more difficult. Temper the use of overly complex models in smaller projects with fewer critical assumptions.
  • Good input
    • Use as much real-time data as is available rather than historically based assumptions.  Currently, this is harder to do with changes influenced by the pandemic.
    • Test the input – is there a similar service, facility or project that can act as a reasonableness test?
    • Don’t simply adjust actual data to include planned changes, as though the changes are complete. Include only confirmed changes. (See Good Assumptions below)
    • Thoroughly consider what is direct fixed cost to the project. For internal decision making, I prefer to measure the contribution margin, or the contribution to the entity’s overhead. For example, the new housekeeper for expanded space is a direct fixed cost, but the overhead of the housekeeping department is not.
  • Good Assumptions –
    • Take time to understand the payor mix and the way in which payors are treating a given group of procedures from a standpoint of coverage requirements, particularly as site neutral payments take hold.
    • Understand how the facility, service line or new technology will be staffed, the related expenses, including which of these expenses are variable versus direct fixed.
    • Understand that even fixed cost can increase if volume moves higher at an aggressive rate, perhaps in a stairstep manner – what is the incremental driver, and which fixed cost is most likely to change in the mid-term?
    • Consider pending or proposed changes as part of the assumptions going forward and not the current baseline.
    • Things never go up or down forever. Don’t get overly optimistic or pessimistic.
  • Where is the sensitivity? – As you build the model, focus on the assumptions that can make the biggest change in the result. What is the model most sensitive to – Volume? Payor mix? Payment increases? Inflation? (I will explore sensitivity in more detail in a later post.)
  • Test the model.  Does the model work in a downward fashion (i.e., volumes) as it does in the upward movement? Depending on the nature of fixed cost, a downward movement should cause an accelerated loss, while an upward movement will show accelerated profits. Is there enough contribution margin to allow growth on incremental volume? Or does the potential exist to lose more money on higher volume?
  • Protect from Bias – Finally, and perhaps most important, protect the model from bias. Predetermining the outcome through model assumptions is a recipe for disaster. (I address this in a separate post to follow.)
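The contribution-margin and sensitivity points above can be sketched in a few lines (all figures are hypothetical): flexing volume around a baseline shows how fixed cost makes net income swing far more, in percentage terms, than the volume change itself.

```python
# Minimal contribution-margin sketch (all figures hypothetical).
# Flexing volume up and down shows the asymmetry created by fixed cost.

def net_income(volume, payment_per_case=1000.0, variable_cost_per_case=600.0,
               fixed_cost=300_000.0):
    """Net income = volume * contribution margin - fixed cost."""
    contribution_margin = payment_per_case - variable_cost_per_case
    return volume * contribution_margin - fixed_cost

baseline = 1000  # cases per year
for pct in (-0.20, -0.10, 0.0, 0.10, 0.20):
    vol = int(baseline * (1 + pct))
    print(f"volume {vol:5d} ({pct:+.0%}): net income {net_income(vol):>12,.0f}")
```

With these assumed numbers, a 20 percent drop in volume cuts net income by 80 percent, while a 20 percent gain raises it by the same dollar amount – exactly the downward acceleration the test above is looking for.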

Some common missteps:

While the use of averages and estimates in a forecast model is inevitable, consider the basis and validity of those averages and estimates. Here are some simple issues that I have run into over time:

  • A benefit percentage that may not reflect the makeup of the workforce. Fifty percent or more of employee benefits have no correlation to percent of salary because they are a fixed amount per employee – for example, health insurance. Another variable is the level of participation in benefits: not all employees enroll in health insurance or retirement plans. Using an average percentage for a workforce with a disproportionately high share of lower-paid employees will understate benefit cost, and vice versa.
  • Know your payor base. Too often, a model will use a percentage of charges, but historically charges have increased at a higher rate than payments, resulting in an overstatement of payments. Whenever possible, use the actual payment methodology (e.g., DRG, per diem, per unit, etc.) as the base. Modeling the actual methodology for the largest payors allows for changes in case mix and volume and a more precise revenue estimate in the case of potential payor actions. Issues to consider include shifts in volume between Medicare and Medicare Advantage, movement to narrower networks, and site neutral payments.
  • Know what is changing or likely to change. Over the past 10 years, every form of Medicare payment has had some significant adjustment. Using payment data from the year immediately preceding the change may overstate the projections. Such a change recently occurred for the physician RVU.
  • Staffing – Be realistic, particularly with start-ups. Assume the required nursing hours are 10 hours per patient day. We need to add paid time off and paid time outside of patient care. More significant, in a start-up, hours of care per patient might be higher during the ramp-up due to minimum staffing requirements.
  • Variable supplies – In one forecast, we saw a situation where the variable supply components were concentrated in three items making up 70 percent of all variable cost. One of the items had fluctuated wildly over the preceding three or four months due to interruptions in the supply chain, with no recovery in sight. As a result, the average price for the trailing twelve months was understated.
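The benefits misstep above can be illustrated with a toy calculation (all salaries, rates, and benefit amounts are hypothetical): when a large share of benefits is a fixed dollar amount per enrolled employee, an average percentage of salary derived from a mixed workforce misprices a workforce that skews lower paid.

```python
# Toy illustration (all figures hypothetical): fixed-dollar benefits vs.
# a flat "percent of salary" assumption.

FIXED_BENEFIT = 12_000.0   # e.g., health insurance per enrolled employee
VARIABLE_RATE = 0.08       # benefits that do scale with salary (e.g., retirement match)

def true_benefit_cost(salaries):
    """Fixed amount per employee plus a salary-driven component."""
    return sum(FIXED_BENEFIT + VARIABLE_RATE * s for s in salaries)

# Average percentage derived from a mixed workforce...
mixed = [40_000, 60_000, 80_000, 120_000]
avg_pct = true_benefit_cost(mixed) / sum(mixed)

# ...applied to a lower-paid workforce understates benefit cost.
lower_paid = [35_000, 40_000, 45_000, 50_000]
estimated = avg_pct * sum(lower_paid)
actual = true_benefit_cost(lower_paid)
print(f"average rate {avg_pct:.1%}, estimated {estimated:,.0f}, actual {actual:,.0f}")
```

Here the blended rate understates the lower-paid group's benefit cost by roughly a third – the direction of error the bullet above warns about.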

Conclusion – Validate your data. Test the assumptions. Challenge your models.


Accounts Receivable Values – Part II – Don’t be Surprised

TESTING ACCOUNTS RECEIVABLE VALUATION
• The value of receivables is an estimate. Estimates need to be reasonable and well documented, with a combination of historical performance and real-time insight.
• Accounts receivable values directly correlate to net revenue and, therefore, financial performance, valuation, and budgeting.
• The first place to start is with a subsequent payment test. There is no better check on accounts receivable value than to see what was collected.
• For larger organizations, subsequent payment by financial class should be considered.
• Establish a lag report matching the month of service to the month of collection.
• Large accounts, special collection issues, and credit balances can skew value and should be reviewed separately.


First, let’s all acknowledge that the valuation of receivables is a group of estimates based on historical performance, general experience, and gut instinct. Estimates can be influenced by biases, revenue cycle team performance, expectations both good and bad, and unfortunately, in some cases, pressure to report better results. The reporting of financial statements requires timely estimates, whether monthly or at year-end. Time is too short to dig deep in preparing an estimate each month. Rather, estimates and assumptions must be an ongoing process. Further, estimates are just that – estimates – and they may be high or may be low. But there are consequences to bad estimates. If receivables are overstated in one year, the correction will have to be absorbed in the next year’s earnings, or future budgets may be based on higher-than-actual revenue.
In Part I of this series, we looked at some of the issues leading to errors[1] in the valuation of accounts receivable and revenue recognition. But more important than how the errors happen is how you monitor accounts receivable values in a current manner.

To address timeliness, the calculations supporting estimates need to be planned and tracked over time, constantly considering changing payments from third parties, charge increases, payor mix, and collection rates. The supporting calculations should be well documented. Monitoring changes in the composition of receivables and its variables throughout the year is critical.

Throughout my career, whether as a CFO, supporting a financial audit, representing a healthcare entity in a sale, or providing buy-side due diligence, the first thing I checked was revenue recognition and accounts receivable valuation.

  • As a CFO, I needed to know the status of accounts receivable, not only for financial reporting but to address weaknesses in the revenue cycle.
  • As an auditor, revenue recognition is the highest area of risk in a healthcare audit.
  • As an adviser to the seller, I wanted to find the overstatement before the buyer to protect my client from unexpected adjustments or find the understatement so that the client got true value.
  • In due diligence, the buyer client expects a thorough review, and much like the audit, it is a high-risk area. More than any other area, receivables are a place where the buyer can impact the sales price by identifying overstated revenue.

Subsequent Payment Test

The first place to start is to understand whether previous values were reasonable. There is no better check on historical accounts receivable values than to see what was ultimately collected. A subsequent payment test compares the reported value of receivables to the actual collections six, nine, and twelve months later. Subsequent payment testing will validate the methodology or expose errors in estimates and calculations. When used in transactions, the testing can confirm historical revenue in a quality of earnings review and minimize post-closing disputes over net working capital.

The challenge is getting good data out of information systems. Accounting professionals looking to do the analysis need to work closely and communicate clearly with the IT group to design the right queries of payment data. Say the hospital wants to look back and test accounts receivable estimates from its June 30th year end. Then the query needs to contain payments made for dates of service prior to July 1st but collected after June 30th. Defining payments can be tricky, depending on the complexity of the system, the clarity of the data fields, and the strength of the report writer. Some things to consider in the query:

  • Report the collections by financial class. This will allow a focus on where the estimate may be too high or too low. This also requires consideration of patients changing financial class during the collection cycle.
  • Distinguish payment between the patient or 3rd party payor.
  • Include a review based on the last payment date, to understand the length of time from the date of service to collection and the gap between an insurance payment and a remaining patient liability.
  • The query may filter out patients who were inpatient on the cut-off date. Payments on these accounts may need to be prorated.
  • Evaluate alternative measurement of any outsourced collection service.

Once the subsequent payment testing is complete, objectively consider what was different between the actual results and the original estimates: one or more financial classes, changes related to a particular payor, declines in patient liability, higher denial rates among a payor or payors, shifts between inpatient and outpatient, increases or decreases in the use of a particular service.
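The cut-off logic of the subsequent payment test can be sketched as follows (the record layout, financial classes, and figures are hypothetical; a real query depends entirely on the system's data fields): keep dates of service on or before the cut-off that were collected after it, and total by financial class.

```python
# Hedged sketch of a subsequent payment test (record layout hypothetical).
# Compare AR value reported at the cut-off date to what was later collected.
from datetime import date

cutoff = date(2023, 6, 30)

# Each record: (service_date, payment_date, financial_class, amount)
payments = [
    (date(2023, 5, 10), date(2023, 8, 2),  "Medicare",   4_200.0),
    (date(2023, 6, 21), date(2023, 9, 15), "Commercial", 7_500.0),
    (date(2023, 7, 3),  date(2023, 7, 30), "Commercial", 2_000.0),  # excluded: service after cut-off
    (date(2023, 4, 2),  date(2023, 6, 20), "Medicaid",   1_100.0),  # excluded: collected before cut-off
]

subsequent = {}
for dos, paid_on, fin_class, amount in payments:
    # Date of service on or before the cut-off, collected after it.
    if dos <= cutoff < paid_on:
        subsequent[fin_class] = subsequent.get(fin_class, 0.0) + amount

reported_ar = {"Medicare": 5_000.0, "Commercial": 8_000.0}  # net AR at cut-off (hypothetical)
for fc, collected in subsequent.items():
    print(f"{fc}: reported {reported_ar.get(fc, 0):,.0f}, collected {collected:,.0f}")
```

Grouping by financial class, as in the first bullet above, is what lets you see where an estimate ran high or low rather than just whether the total held up.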

Monitoring in “Real Time” –

Understand that as charges increase, contractual adjustments go up, and as payments increase, contractuals go down. Knowing how and when to change collection percentage assumptions is critical to maintaining good estimates.

Establish a lag report as an ongoing monthly report. A lag report will track collections, matching the month of collection to the month of revenue, and calculate completion rates compared to the original estimate. The lag report allows the provider to develop expectations on how quickly claims are collected and how long the run-out or completion process is. For example, what is the expected and actual completion percentage at 30, 60, 90 days? The lag report can be done for receivables in total or for specific payor classes.

By updating a lag report monthly, the healthcare entity is effectively creating both a rolling look-back at subsequent payments and developing a rolling average collection rate based on the age of an account.
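A minimal sketch of the lag-report idea (all figures hypothetical): bucket collections for one service month by months elapsed from service to payment, then compute the cumulative completion rate against the original net revenue estimate.

```python
# Lag report sketch (all figures hypothetical). Collections for a single
# service month, bucketed by months elapsed from service to payment.
estimated_net_revenue = 100_000.0
collections_by_lag = {0: 35_000.0, 1: 40_000.0, 2: 12_000.0, 3: 5_000.0}

cumulative = 0.0
for lag in sorted(collections_by_lag):
    cumulative += collections_by_lag[lag]
    completion = cumulative / estimated_net_revenue
    print(f"through month {lag}: collected {cumulative:,.0f} ({completion:.0%} complete)")
```

Repeating this for each service month, and optionally for each payor class, produces the rolling look-back and rolling collection rates described above.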

The lag report is especially useful when evaluating the receivables of a cash basis entity. Cash basis providers do not maintain estimates of accounts receivable value. A lag report allows the buyer to estimate receivables being acquired based on the historical time to completion.

Keep Receivables as Clean as Practical –

Resolving accounts quickly provides not only more timely cash but a clearer picture of what is and is not collectible. Resolving charity patients quickly is one way to get that clearer picture. All healthcare providers, for-profit or not-for-profit, can benefit from clearing out bona fide charity patients as quickly as possible. Qualified charity patients pay little or nothing, clog up the system, consume resources, and inflate receivables. A strong charity program with timely resolution is not only a compliance issue for not-for-profits but will reduce the potential for misstatement from charity accounts and identify non-charity accounts that require immediate collection efforts. Addressing charity patients in a timely manner will improve reporting and impact payment based on Medicare Cost Report Schedule S-10.

The same is true of determining when an account is uncollectible and should be written-off to bad debt or transferred to a collection agency to be separately monitored.

Credit balances can understate the value of receivables but may also alter the calculation of contractual adjustments or bad debt allowances. A credit balance arises from an overpayment (almost always due back at 100%) or a misposting (almost always worth nothing). These two extremes distort the value of accounts receivable, net of contractuals and uncollectibles. For example, assume that a refund of $1,000 is due and included in a financial class with 50% collectability. By including the credit balance in accounts receivable, the contractual adjustment is understated by $500. Consider what the impact would be if credit balances are $100,000 or more. Whatever the methodology used to estimate receivables, the estimates should be based on accounts receivable excluding any credit balances.
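The arithmetic in that example can be checked directly (the gross AR figure is hypothetical; the $1,000 refund and 50% collectability come from the example above):

```python
# Worked example from the text: a $1,000 credit balance (refund due) sitting
# in a financial class estimated at 50% collectability.
collectability = 0.50
gross_ar_excluding_credits = 200_000.0   # hypothetical positive AR in the class
credit_balance = -1_000.0                # refund due back at 100%

# Correct: value the positive AR, then carry the refund at face value.
correct_net = gross_ar_excluding_credits * collectability + credit_balance

# Incorrect: netting the credit into the base before applying the percentage.
incorrect_net = (gross_ar_excluding_credits + credit_balance) * collectability

understatement = incorrect_net - correct_net
print(f"contractual understated by {understatement:,.0f}; net AR overstated by the same amount")
```

Because the refund is owed back in full, applying the 50% rate to the netted base cuts the contractual adjustment by exactly half of the credit balance, here $500, which is why the text recommends estimating on receivables that exclude credit balances.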

Adjusting to Account Specific Conditions

The subsequent payment test and the use of a lag report are the first steps to understanding the collectability of accounts receivable. But some accounts, by their very nature, have a long collection cycle or other complicating factors, and some areas of accounts receivable may be better evaluated on an account-by-account basis. Large disputed or delayed-collection accounts skew results. It may be appropriate to exclude these large accounts from the adjustment calculation or track them separately.

Consider some of the following issues as either adjustments to any estimate or requiring separate consideration:

  • Denials of coverage for medical necessity – Early monitoring of these will also allow the entity to engage the payor, the patient or the clinical service lines to reduce the issue going forward.
    • Payor practices.
    • Large accounts.
    • Type of denial.
  • Legal claims where there may be third party liability –
    • Nature of the claim.
    • Other available coverage and subrogation rights.
  • Patients paying over time
    • Consider aging from date of last payment to identify accounts not paying as promised.
    • Monitor compliance based on original balance, size of periodic payment, use of automatic draft.
  • Chapter 13 bankruptcies and expected recoveries
  • Out-of-network claims –
    • Nature of service – emergency or elective can affect the payor view of coverage.
    • Fairness of payor payment rates and success resolving with payors.
  • Unique arrangements with 3rd parties, particularly newer arrangements in the evolving use of value-based payments.

Receivables will always require monitoring and verification. Regular monitoring of receivables will identify not only changes to the contractual adjustment but also issues affecting collectability. Even the best methodology can vary due to changes in payor mix, case mix, staff turnover, contract updates, local economies, and 3rd party behavior.

NOTE: This article is focused on the practical aspects of measuring collectability. It does not attempt to address issues related to new accounting standards on revenue recognition. You should consider reviewing revenue recognition standards with your auditor or other accounting professional.

For questions or help in reviewing these or other complex healthcare financial issues, contact us at ken.conner@connerhealthcare.com.

[1] Changes in accounting estimates are recorded in the period in which they are corrected. Accounting errors give rise to a restatement of a prior year. Year-to-year changes in estimates are not errors. Errors are, and should be, a rare occurrence. However, occasionally an estimate is so bad that it is an error.