Deeper Transparency: The Future of XBRL

By Agnes Grunfeld

Keynote address at the XBRL and Financial Analysis Conference co-sponsored by The New York Society of Security Analysts (NYSSA) and XBRL US

Introduction

Thank you for the introduction, Campbell.

I feel a lot of pressure speaking here today. This is a historic moment in the development and adoption of XBRL. We’re past the fourth anniversary of the SEC mandate. Limited liability is expiring this year. XBRL is becoming what it was meant to be: the lingua franca of business reporting.

The adoption of XBRL involves and affects institutions across the spectrum – issuers, investors, regulators and many others. The stakes are high. If XBRL lives up to its potential, it will move modern capital markets toward a more sustainable foundation: what I would call deeper transparency, which is characterized not only by visibility, but also by accuracy, accessibility and reusability.

But the success of this standard — and interactive data in general — is not yet a foregone conclusion. We continue to see surveys and media reports about challenges related to data accuracy and staff training. There’s no doubt that these are serious risks that could jeopardize the desired outcomes of XBRL implementation.

Demand for XBRL

Still, in my view, the need for deeper transparency is undeniable. Look at the explosive growth of Big Data and the new layers of complexity that the typical investor confronts in the analysis of issuer risk. Look at the volume of data “trapped” (so to speak) in pre-XBRL formats.

Despite the clear need for innovation, we’ve seen signs of strong resistance as well. I don’t want to trivialize the resistance to using XBRL data that is prone to errors. But I think it’s only fair to consider XBRL’s strengths and weaknesses in relation to what it is replacing.

I am not an XBRL technologist. But I spent most of my career structuring, interpreting and curating large volumes of highly varied data. Having done this work for more than 30 years, I have seen the effects of several generations of technological innovations on data analysis.

When I started my career in the Equity Research Department of Goldman Sachs in the 1970s, I copied figures from quarterly and annual reports onto green-lined accounting pads for all the companies followed by the Food & Beverage analyst.  Over time, he taught me how to make adjustments to the data to put them on the same footing when companies chose to report things in different ways, how to deal with restated data, and how to calculate key ratios and change statistics.

Later, I had my own assistant keying data into Excel spreadsheets for me and setting up formulas. As we got more sophisticated, we started setting up macros to handle repetitive tasks.  Later still, we would download information directly into Excel instead of keying it in by hand.  You get the picture.

Data collection was time-consuming and highly prone to error.  It was frequently assigned to young, inexperienced people.  As we moved to the spreadsheet stage, some of the manual tasks were automated and improved, but new pitfalls took their place. For example, once an error becomes embedded in a spreadsheet, it can propagate through every calculation that depends on it, and its impact can become much larger.

Reading some of the criticisms focused on XBRL data accuracy, one might conclude that XBRL improves nothing and only introduces a gratuitous complication into a reporting ecosystem that does not need further complications.  In reality, it was the limitations of the outdated system of data dissemination that motivated the regulatory mandate for the mainstream adoption of XBRL.

In 2006, former SEC Chairman Chris Cox joked about XBRL’s PR problem: “If you think I enjoy talking about something called XBRL taxonomies, you don’t appreciate what I learned as a member of Congress a long time ago. People just don’t want to hear about anything that starts with the word ‘tax’”.  To some extent that PR problem persists today – as soon as you utter words like “taxonomy” or “extensible”, some people just shut down.

On a more serious note, Chairman Cox pointed out that anyone working in financial analysis would prefer to focus on analysis, not the brute labor of compiling data. He went on to say that the SEC’s strong interest in interactive data is a natural outgrowth of the agency’s main mission: to protect individual investors.

Here are Chairman Cox’s own words:

  • “Markets function best when all the information market participants need is available to them when they want it and in the form they can use it.”
  • “Obtaining and crunching financial information more easily will strengthen our ability to police wrongdoers and prevent fraud. But that’s not the whole story by a long shot. The real basis of our interest in interactive data at the SEC is our fundamental mission: to protect investors.”
  • “Imagine your work in the world of interactive financial data. No more re-keying of information. Even if currently you are relying on the back office or outsourcing to India…, you are still victimized by the huge error rate built into the task of manually re-entering financial information from SEC reports. You may not know this, but, even the automated data tools currently used to parse the data in SEC filings can have an error rate of 28%. And that already unacceptably high level of mistakes from unreliable data rises for those of you who dig deeper into footnotes to seek information on pensions, stock options or leases. It’s a hell of a way to run a capital market.”

Overall, Chairman Cox’s 2006 speech on XBRL still rings true today. If anyone is interested, the speech is available on YouTube. But the conclusion is clear: while we have to take inaccuracies and errors very seriously and work to minimize them, we shouldn’t lose sight of the fact that the alternative methods of data collection and analysis are all prone to error as well.

That’s why I have mixed feelings about the headlines we’ve been seeing this year about the chilling effect of poor data quality on market sentiment toward XBRL. Foundational innovations rarely enjoy a steep and steady trajectory to market acceptance. We can acknowledge the technical challenges and other uncertainties for XBRL stakeholders. But we should weigh these challenges against the risks inherent in the earlier methods of reporting that XBRL is displacing.

There’s also reason to believe that the resistance to XBRL adoption is not only related to technological challenges and the overuse of extensions by filers. The resistance may also reflect a failure of will and thoughtful execution. For example, the FFIEC has required banks to submit their quarterly Call Reports in XBRL for years now, and in that area we haven’t seen the same complaints as in response to the SEC requirements. The implementation was different. Maybe we can learn from that.

How market participants use XBRL

I was glad to see that much of this event will focus on how XBRL stakeholders are actually thinking about and applying the XBRL standard to their day-to-day work.

In my work at GMI Ratings, the main goal is to give investors an easier way to incorporate into their analysis certain variables and measures of issuer risk that current reporting standards do not fully reflect. We use regulatory filings and other sources to extract data that can reveal investment risks stemming from environmental, social, governance and accounting practices of approximately 20,000 corporate issuers worldwide. To this data, we apply our taxonomies and algorithms, mainly to help investors distinguish material risks from marginal concerns, and then we assign ratings to the companies.

In addition to non-traditional data elements, our research platform also includes basic, widely used financial data as well as event alerts based on corporate actions such as M&A, divestitures, financings, executive appointments, and other events likely to alter a company’s prospects.  Many of these items are found on the face of the income statements, balance sheets, and cash flow statements or in footnotes – areas the SEC mandate for XBRL tagging covers.  But some of the most interesting elements are in the MD&A, in proxy filings, in 8-Ks, in press releases, or in news stories not authored by the corporation.  Capturing those elements can be very hit-or-miss.  I would love to see XBRL or some other structured data approach applied much more broadly to themes that are not strictly financial but are nevertheless key to a comprehensive assessment of risks.
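
To make that distinction concrete, here is a minimal sketch of how a data consumer can pull face-financial facts out of a filer’s XBRL instance document once those facts are tagged. This is my own illustrative snippet, not a production XBRL processor; the file name and the handful of us-gaap concept names are assumptions chosen only for the example. There is no comparably simple recipe for the MD&A or a press release, which is exactly the gap I would like to see structured approaches fill.

    import xml.etree.ElementTree as ET

    # A few common us-gaap local names, chosen purely for illustration.
    WANTED = {"Revenues", "NetIncomeLoss", "StockholdersEquity"}

    def extract_facts(instance_path):
        """Return {concept: [(contextRef, value), ...]} from an XBRL instance file."""
        facts = {}
        for _, elem in ET.iterparse(instance_path):
            local = elem.tag.rsplit("}", 1)[-1]  # strip the namespace URI
            if local in WANTED and elem.text and elem.text.strip():
                facts.setdefault(local, []).append(
                    (elem.get("contextRef"), elem.text.strip())
                )
        return facts

    # Hypothetical usage: print(extract_facts("acme-20131231.xml"))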

Some examples: there is a movement towards “Integrated Reporting,” which would include ESG reporting alongside financial reporting; the SASB is developing industry-specific standards for what should be reported on “sustainability” issues; and in the corporate actions arena, work is also being done to create a taxonomy for reporting distinct corporate actions.  All of these are important developments that will flesh out what is available beyond the data the SEC has mandated be included in 10-Q and 10-K filings.  As this process moves ahead, it is vital for all parties to agree on the terms and definitions that govern reporting applications.

Capital market participants reach many important decisions through judgments about relative value and relative risk, about the distribution of issuer characteristics across peer groups, industries, asset classes, market cap ranges and many other criteria. This is a multi-dimensional undertaking requiring multi-dimensional models for organizing information. Paper-based filings are simply not suited for this task.   XBRL and other structured-data approaches provide the flexibility to slice and dice and generally reformat information to fit your preference as a consumer of the data.  It may be that other structured data approaches will ultimately be the better way to go for some of the data.  But there is no question in my mind that structured data is the only sensible path for the future.

From my experience with data, I believe that having the authors of information structure or tag it at the earliest possible point in their systems is the most effective approach, especially when it comes to avoiding interpretation risk.  In practice, this may mean having a business information system vendor do the tagging, rather than the corporate filer tagging the data when they close their books.  In a white paper released this December by Trevor Harris and Suzanne Morsfield of Columbia Business School, there is a suggestion that partnering with such vendors, and perhaps major data aggregators, might be one path to improving the XBRL technology.  From my understanding, this is what the FDIC did back in 2005 when it adopted XBRL for the Call Reports filed by banks. It worked with five software companies to embed the FDIC taxonomy into the software the banks were already using to prepare Call Reports, and had a very successful outcome.

As an analyst and risk modeler, I’ll share just one key conclusion from the academic research on the heavy cost of mispriced risk.

  • Risk modeling generally diminishes in efficacy when it excludes or obscures important variables. If you’re studying a large and complex entity (such as a modern corporation) materially affected by a very large number of variables, your understanding of investment risk suffers when a meaningful measure of each variable is hard to access and incorporate into a sensible taxonomy.
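
To illustrate that conclusion, here is a toy sketch of my own on synthetic data (not real filings) showing how the explanatory power of a simple model drops when a material variable is omitted. The variable names and coefficients are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    leverage = rng.normal(size=n)      # a variable the model includes
    governance = rng.normal(size=n)    # a material variable we might omit
    risk = 2.0 * leverage + 3.0 * governance + rng.normal(size=n)

    def r_squared(X, y):
        """R-squared of an ordinary least-squares fit of y on X (with an intercept)."""
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return 1.0 - (y - A @ beta).var() / y.var()

    print("both variables:    ", r_squared(np.column_stack([leverage, governance]), risk))
    print("governance omitted:", r_squared(leverage[:, None], risk))

The first fit explains most of the variation in risk; the second loses the larger part of it, even though nothing else about the model changed.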

That’s why it makes sense that the SEC remains committed to facilitating wider deployment of interactive data standards. The agency itself is using interactive data more extensively over time. The application of XBRL to fraud detection is particularly interesting. Fraud detection is one of the things we do at GMI Ratings, so I was intrigued when I read in February this year that the agency had deployed a computerized tool nicknamed “RoboCop”, designed to automatically trigger alerts based on quantifiable markers of aggressive accounting. RoboCop relies heavily on XBRL tags to detect anomalies that stand out from the normal pattern in the data set.
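
By way of illustration only, and emphatically not the SEC’s actual model, whose internals have not been published, a screen of that general flavor becomes straightforward once the underlying facts are tagged: compute a simple accounting ratio from two tagged concepts and flag the filers that sit far from the peer-group norm. The concept names and threshold below are assumptions made for the sketch.

    import statistics

    def flag_outliers(filings, threshold=3.0):
        """filings: dicts built from tagged facts, e.g.
        {"ticker": "ABC", "ReceivablesNetCurrent": 120.0, "Revenues": 400.0}.
        Returns tickers whose receivables-to-revenue ratio sits more than
        `threshold` standard deviations from the group mean."""
        ratios = {f["ticker"]: f["ReceivablesNetCurrent"] / f["Revenues"] for f in filings}
        mean = statistics.mean(ratios.values())
        stdev = statistics.pstdev(ratios.values())
        if stdev == 0:
            return []
        return [t for t, r in ratios.items() if abs(r - mean) / stdev > threshold]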

For regulators, the investment in XBRL is simply essential. We continue to see reports that financial regulators find themselves deeply challenged by data overload. Here’s a quote from a keynote address by CFTC Commissioner Scott D. O’Malia to the SIFMA Compliance and Legal Society Annual Seminar.

  • “Solving our data dilemma must be our priority and we must focus our attention to both better protect the data we have collected and develop a strategy to understand it. Until such time, nobody should be under the illusion that promulgation of the reporting rules will enhance the Commission’s surveillance capabilities.”

XBRL should be applied to more than financial data

Several experts in the field have compared XBRL to other supply chain standards such as the bar code. I think that’s an instructive comparison. But it’s also helpful to think of XBRL as a language. That’s what it’s called. That’s what it is.

So, we can think of the failures of XBRL to date as failures to communicate, failures to find a common language and a clear syntax.  All languages are collective creations governed by shared rules, and no one seriously questions the need for a common language. XBRL stakeholders mainly need more practice to acquire greater fluency in this language.

Thinking of XBRL as a language also helps frame the point I’d like to emphasize the most in my comments.  Getting the market to learn the new language is only the beginning. The next urgent challenge is to compel all market participants to use the language for consistently candid communications, not just mandated disclosures.

From my vantage point, the format in which investors access corporate filings is only part of the problem. The other important aspect of the problem is that the content of corporate filings still overlooks many important dimensions of corporate performance and prospects. If we only solve the technical/execution issues detailed in the Columbia report, that won’t be enough, in my view.

Let’s not forget that XBRL was always meant as a means to an end, not an end in itself. The community of XBRL stakeholders represented here is focused on finding ways to fix the problems currently impeding broader adoption of XBRL. This is certainly a step in the right direction, but it’s only the first step. I believe that data published in structured, machine-readable format needs to be of broader economic interest – not just finance or accounting per se.

Beyond improvements in financial reporting through regulatory filings, XBRL can improve transparency across the information supply chain. At least for this audience, I doubt I need to elaborate on the foundational importance of transparency in the modern economic system.

So, to conclude, I’d like to thank you again, Campbell and Michelle, for inviting me to speak here. And thank you all for your interest. I’m happy to take questions.