Case Study: Fifth Time Was a Charm - DecisionBoundaries

We recently completed an engagement in a very run-of-the-mill area: business valuation. Although we would usually refer such “templated” work to a firm that could do it more efficiently, we’re glad we took this one: the engagement yielded valuable insights into other professionals’ thinking patterns.

What motivated us to take the engagement on was that our client, a law firm representing ESOP shareholders who alleged that the ESOP trustee had overpaid for the company, was in a tight spot: multiple valuations – including its own expert’s valuation – validated the purchase price.

The fact pattern was unremarkable: a leveraged ESOP (without a particularly complex capital structure) gone bad – not bad as in insolvency-bad, but rather as in the-trustee-paid-too-much-bad.

The original valuation was prepared for the ESOP trustee by a “household-name” financial advisory firm (“Firm 1”) in the broader context of fairness and solvency opinions. Additional annual valuations (three in all) were prepared for ERISA compliance purposes by a nationally recognized accounting firm (“Firm 2”). Those could be easily discounted: they basically updated Firm 1’s valuation. Yet another valuation, this one for litigation defense purposes, was performed by another well-known international financial advisor (“Firm 3”) after the complaint against the trustee was filed. The final valuation had been prepared for the ESOP-plaintiff by a well-respected regional expert who also failed to find damages (“Firm 4”)1.

So, we had in front of us six valuations prepared by four different expert firms. The conundrum was that all of those prior valuations clustered within a [−7%, +2%] interval around the original one2. What troubled us was that we had an empirical data point – the company’s actual post-LBO performance – which suggested that the deal had not been fair and that, as a consequence, the ESOP shareholders had been harmed.

We started by probing the usual suspects: unrealistic growth assumptions, overoptimistic margins, a wrong WACC, improper EBITDA add-backs, etc. That led nowhere: we too were stuck in the “valuation cluster” and, like all the others, at odds with the empirical evidence. We then probed the assumptions-behind-the-assumptions by reconstructing expectations from contemporaneous derivatives of forecasts of macro variables relevant to the industry in question. That step moved the needle out of the cluster, but not enough: it returned a fair value 12% below the median of the six valuations, yet one that still did not reconcile with the company’s ability to generate cash. Both approaches turned out to be dead ends.

We then decided that our only shot at reconciling to reality was to resort to first-principles thinking. We put ourselves in the shoes of Firm 1 and relied only on the information that was available to it at the time. In the interest of economy, the only shortcut we took was to reuse its projections, WACC, and EBITDA add-backs, since we had already broadly validated them.
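The mechanics of such a from-scratch rebuild can be sketched as a plain unlevered DCF. All figures below are invented for illustration; the actual projections, WACC, and add-backs are confidential and disguised per footnote 3:

```python
# Minimal unlevered DCF sketch (all inputs hypothetical, not the case figures).

def dcf_enterprise_value(fcf_forecast, wacc, terminal_growth):
    """Discount projected unlevered free cash flows, plus a Gordon-growth
    terminal value, back to the valuation date. Returns ENTERPRISE value."""
    # Present value of the explicit forecast period
    pv_explicit = sum(
        fcf / (1 + wacc) ** t
        for t, fcf in enumerate(fcf_forecast, start=1)
    )
    # Terminal value at the end of the forecast, then discounted back
    terminal_value = fcf_forecast[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal_value / (1 + wacc) ** len(fcf_forecast)
    return pv_explicit + pv_terminal

# Hypothetical five-year free-cash-flow forecast ($mm), 10% WACC, 2% terminal growth
ev = dcf_enterprise_value([40, 44, 48, 52, 56], wacc=0.10, terminal_growth=0.02)
print(round(ev, 1))  # prints 622.4
```

The key discipline, as described above, is that every input must be something Firm 1 could have known at the time, and the output must then reconcile with the company’s observed ability to generate cash.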

Bingo! We came up with a $400 mm valuation versus the $1.2 bb normalized median3, and $400 mm was very close to what one would expect the company to have been fairly worth at the time of the leveraged ESOP, given its ability to generate cash.

So now we had two coinciding data points pitted against the cluster: 1) the company’s performance and 2) our from-scratch valuation.

By that time, we were positive that our figure was right and the cluster was wrong; the next question was what had gone so wrong with six valuations prepared by four different firms. The answer was immediately apparent to us: a misapplication of finance theory, compounded by a lack of Bayesian updating, which led to (to borrow a term from biology) homeostasis.

Firm 1 had made a conceptual mistake, and all subsequent valuations (including our first one) had blindly followed the same fallacy, challenging the variables but not the theory that wove them together. The mistake was so obvious that I am embarrassed we didn’t catch it at first sight. Firm 1 had applied the Modigliani-Miller theorem, which states that, in a world with no taxes, the value of the unlevered firm is the same as the value of the levered firm. Firm 1’s error was to apply Modigliani-Miller to the firm’s equity value rather than to its enterprise value. The other three firms (and, initially, we) had all fallen for the fallacy like dominoes.
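One plausible reading of how such an error plays out numerically (the round figures below are ours, not a reconstruction of Firm 1’s workpapers): Modigliani-Miller’s invariance holds for enterprise value, but equity holders own only what is left after the acquisition debt. In a heavily leveraged deal, conflating the two overstates equity by the full amount of the leverage:

```python
# Hypothetical illustration of the conceptual error (figures invented).
# An unlevered DCF returns ENTERPRISE value; per Modigliani-Miller (no taxes),
# leverage does not change that enterprise value -- but the equity is only
# what remains after subtracting the debt.

enterprise_value = 1_200  # $mm, output of the unlevered DCF
acquisition_debt = 800    # $mm, debt raised to fund the leveraged buyout

# Correct: equity value = enterprise value - net debt
equity_value = enterprise_value - acquisition_debt
print(equity_value)       # prints 400

# The fallacy: applying the MM invariance to equity rather than enterprise
# value, i.e. reading the DCF output directly as the equity value.
mistaken_equity = enterprise_value
print(mistaken_equity)    # prints 1200 -- overstated by the full debt load
```

On these assumed numbers the overstatement equals the debt itself, which is why the error scales with leverage and why a leveraged ESOP is exactly where it bites hardest.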

One culprit was the psychological phenomenon known as conformity: the act of matching one’s attitudes, beliefs, and behaviors to those of others4. In other words, people often adopt others’ beliefs because it is easier to follow a well-trodden path than to blaze a new one, particularly when one’s predecessors enjoy high status.

In my view, although Firm 1 was the epicenter of the cascading mistake, it was the least afflicted with conformity. They had only one exogenous data point: the value at which the deal was struck. In fact, Firm 1’s valuation was slightly above what the ESOP paid for the company. Their conceptual mistake (a Hanlon’s Razor mistake) coincidentally returned a valuation broadly in line with the terms of the deal. They just used it as external validation of their own (sophistic but good faith) calculation.5

But, fortunately, the story has a happy ending: Firm 4 amended its report to reflect our finding and announced its willingness to testify to its integrity. That settled the question of whether the shareholders had been injured and, if so, by how much (they had, and the harm amounted to an eye-popping $800 mm, or almost 70% of the deal price).

The only issue still left to be resolved is that of the trustee’s reasonable reliance.

Lesson learned? I’ll just let Charlie Munger express it in his inimitable style: “If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. You’ve got to have models in your head. And you’ve got to array your experience, both vicarious and direct, on this latticework of models.”


1 One interesting aspect of the engagement, and one of our motivations to take it, was that a comparables analysis was not feasible: there were no publicly traded companies anywhere near the valuation target’s SIC or NAICS codes. Only the original valuation used a comps analysis, but the “comps” were portfolios of stocks of companies in unrelated industries that were supposed to replicate the economics of the valuation subject. The “theory” behind the replication portfolios was inane, and no comps were used in any of the subsequent valuations.
2 Except for the ERISA valuations, all of the valuations were prepared as of the closing date of the ESOP. We normalized the contemporaneous ERISA valuations to render the [−7%, +2%] interval meaningful.
3 We have scaled all figures by a constant ratio in order to conceal the identity of the company, since the litigation is ongoing.
4 “Social influence: Compliance and conformity”, Cialdini and Goldstein (2004).
5 This conclusion is true with respect to Firm 1’s DCF valuation. On the other hand, Firm 1’s fallacious construct to validate their DCF with an unsubstantiated theory of comparables can be held to have been an effort to conform to their own valuation in order to avoid cognitive dissonance.
