I used our recent office relocation to review some files I had not visited in a while. Though an arduous undertaking, it provided some surprising rewards. Among other things, I came across one of my first major projects, done with the great firm of Peat, Marwick, Mitchell — which soon after became KPMG.
Our client was the Department of Taxation and Finance for New York State. I worked under a business economist, Don Welsch — a brilliant and fun guy, a real visionary, and a good mentor to me.
My job was to develop a model for the economy of New York State, segmented into about 150 sectors. The State wanted to have a computer model ready to go so they could change sales tax rates (read: raise taxes) in selective categories if they needed to rapidly plug a revenue gap.
I had in effect become the firm’s reigning guru in sales tax during a previous engagement, and before that had studied data analysis and linear modeling at Yale. This was a unique opportunity to fuse and leverage both cognitive streams.
It hardly needs saying that many aspects of this assignment were much different then than they would be today. It was the Digital Dark Ages! (Though of course we didn’t know this at the time. We were just trying to get our work done in the best way possible.) The Internet was still a dozen years in the future. Personal computers were just barely on the horizon, and nowhere to be seen even at well-funded firms like KPMG.
Don had a relationship with Chase Econometrics, a timesharing service that we used for time-series data by dialing in on a TI Silent 700 terminal through an acoustic coupler, into which we plugged a landline phone. Results came back to us dot-matrix-printed on proprietary (and correspondingly expensive) thermal paper at a blazing 300 baud, about 30 characters per second.
That was a quick sprint through the infotech graveyard — but, for its time, our process was state-of-the-art. The Silent 700 was one of the first portable terminals to achieve major commercial penetration, so we could work from an office (ours or those of our client), rather than have to go to a data center. Many of our consulting colleagues regarded us as futuristic space cadets.
In a sense, we were doing big data long before anyone had found a need for the term. Yet, at the same time, it was all reassuringly artisanal. We had to call first by phone to make sure the tape operator at the Chase offices in Massachusetts had the correct tape loaded on the mainframe at their end.
We also used other sources that were non-technology-based. I remember visiting the US Commerce Department in Washington often so we could make full use of their Personal Consumption Expenditures (PCE) data, which at that time were available only in a paper publication.
The model I developed contained measurements of revenues and growth rates for about 150 economic micro-sectors, including both products and services important to the New York economy. When I finished doing the research, I had a guy program the interactive portion of the model using timesharing on a mainframe — because PCs were not yet available to us. (The Apple II was around, but few of us big-firm New York guys took PCs seriously as tools for client work until the IBM PC appeared in 1981.)
Our research apparently got some traction in the world of municipal finance, and was published in a paper (“Estimating Disaggregated Sales Tax Impacts: An Empirical Investigation”) that Don and I presented to the Southern Regional Science Association convention in Knoxville, Tennessee in 1982 — my first published paper.
Our tools were primitive by today's standards. Yet our client's problems and opportunities, and the use case for our work product, were much the same as they are to this day. Decision makers (i.e., legislators) wanted to take actions (i.e., changes to tax rates) based on empirical evidence, rather than winging it and hoping for the best outcome. They needed us to (1) develop and curate the information, and (2) make it readily accessible and usable when needed.
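To make the use case concrete, here is a minimal modern sketch of the kind of question the model answered. This is not the original model (which ran on a timesharing mainframe and covered about 150 sectors); the sector names, base figures, and elasticities below are invented purely for illustration, and the constant-elasticity adjustment is one simple assumption among many a real model would refine.

```python
# Hypothetical sketch: estimating the revenue impact of a sales-tax
# rate change across a few illustrative sectors. All numbers invented.

sectors = {
    # sector: (taxable sales base, price elasticity of demand)
    "apparel":     (2_000.0, -1.2),
    "restaurants": (3_500.0, -0.8),
    "electronics": (1_500.0, -1.5),
}

def revenue(rate, base, elasticity, old_rate):
    """Sales-tax revenue after a rate change, with a simple
    constant-elasticity adjustment to the taxable base."""
    # Change in the tax-inclusive price caused by the rate change...
    price_change = (1 + rate) / (1 + old_rate) - 1
    # ...shrinks (or grows) the taxable base accordingly.
    adjusted_base = base * (1 + elasticity * price_change)
    return rate * adjusted_base

old_rate, new_rate = 0.04, 0.05
for name, (base, elas) in sectors.items():
    before = revenue(old_rate, base, elas, old_rate)
    after = revenue(new_rate, base, elas, old_rate)
    print(f"{name}: {before:.1f} -> {after:.1f}")
```

The point of disaggregating by sector is visible even in this toy: the same one-point rate increase yields different revenue gains in each sector, because demand responds differently, which is exactly what lets a legislator target the categories where a rate change plugs the most revenue with the least distortion.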
We succeeded in putting the NYS economy into an electronic “box” that could then be pushed and poked by decision makers to forecast and create real-world results. The knowledge we provided them became a springboard to their value-enhancing actions.
In that sense, this project from over three decades ago was essentially the same as what I do now. It would of course be much easier (and faster and less expensive) today with PCs and the Internet.
Looking back on it, I am proud of this engagement. It was empirical, it was value-relevant, it was usable, and it was dynamic — tests for quality that I still apply to my work products.
I am extraordinarily fortunate to have been able to pursue this work that I find continually interesting, challenging, and (dare I say) useful — both to my clients and to society.