If you’re in a position of authority and/or responsibility over digital technologies in an organization, there are several things worth considering before adopting generative AI (genAI) at scale. In this executive-level flyover, I’ll steer clear of the technical weeds — but would be happy to refer you to my sources — mostly academic studies — upon inquiry. I’ll stick to ‘just the facts’ in the numbered sections — my opinions follow.
1. We’ve been using AI commercially for decades — neural networks have been used in fraud detection since the early 1990s, and machine vision systems have been commercially available since the 1980s.
2. genAI is new (based on the 'transformer' model invented at Google in 2017) and is one of several categories of AI. OpenAI's ChatGPT, introduced in November 2022, was the breakout success that brought AI to mainstream attention.
3. genAI models are built on text (‘language models’), graphics, audio, video — or some combination.
4. Tens of thousands of genAI producers/vendors offer competing products, most of them structured as profit-making businesses (including OpenAI, Google, Microsoft, Meta, and others), though genAI itself as a product is not reliably profitable.
5. Narratives around AI often conflate genAI with AI in general, and even with data analytics and digital transformation in general. It’s critical to specify which of these we’re talking about.
6. genAI accesses and ‘trains on’ huge amounts of data, finds and analyzes patterns in it, and generates answers based on user queries and prompts. Text sources include news outlets, other publications, Wikipedia, social media conversations, audio transcriptions, etc.
7. Many genAI models train on non-licensed, unfiltered data ingested electronically from the internet without the knowledge of, or permission from, its creators/owners. To be fair, some datasets contain data licensed from its owners or in the public domain.
8. genAI models are expensive to build, run, and maintain — and produce a range of adverse environmental and human capital impacts including massive consumption of power, water, and low-cost human labor.
9. genAI producers/vendors are notoriously opaque about their sources and methods, claiming that these deserve protection as competitive secrets.
10. genAI products are frequently updated or superseded, yet successive updates have added progressively less functionality than previous ones. Some say genAI capabilities are plateauing over time, or even degrading as AI-generated data is fed back into our 'epistemic ecosystem.'
11. The primary use cases for text-centric genAI currently include generating simple computer code; generating non-critical copy, like promotional notices; supporting ideation; and creating customized customer-experience chatbots. There are also text-to-graphics, music, speech, and video generators, covering virtually any type of digital content.
12. Claims are made for a range of future applications, many of which are in the proof-of-concept or testing stage. Much of the talk is full of promises, hopes, and prayers — but, as yet, light on proven ROI-positive applications.
13. An OECD white paper (November 2024) identifies 21 potential benefits that support its five AI trustworthiness principles: (1) inclusive growth, sustainable development, and well-being; (2) respect for the rule of law, human rights, and democratic values, including fairness and privacy; (3) transparency and explainability; (4) robustness, security, and safety; and (5) accountability.
14. The same OECD paper describes 38 potential AI risks, including cybersecurity attacks, disinformation, ethical shortcomings, and lack of alignment with human goals and values. A team led by MIT has built a living database of existing studies of AI risks, currently numbering more than 700.
15. Though it can be trained to mimic a person, genAI cannot think, reason, anticipate the unexpected, or distinguish truth from falsehood.
16. genAI does not reliably yield the same result when a given prompt or question is repeated (see the short code sketch following this list).
17. genAI often fabricates content entirely ('hallucinations') rather than admitting it doesn't know something.
18. genAI is an integrator and homogenizer of internet information: it can produce an amalgamated mash-up, but few individual citations. As a result, it's difficult to identify and vet its sources, and therefore to fact-check and verify its output.
19. genAI in many situations amplifies cybersecurity risk, for example through deepfake impersonations, social engineering, and untrained individuals entering confidential content as prompts. Amazon attributes a recent sevenfold increase in cyberthreat incidents largely to AI.
20. genAI has now been the subject of a two-year publicity wave that has dominated business and technology news and produced stock valuations far in excess of what industry-standard forecasting and projections would support.
21. At the consumer level, genAI is currently being deployed (by Google, Apple, and Microsoft, for example) in the form of summarized internet searches and enhanced interactivity.
22. At the enterprise level, production deployment of genAI is slower than its producers/vendors typically report. Deloitte recently reported ("Governance of AI," June 2024) that only 4% of organizations surveyed have incorporated AI throughout their operating plan for the coming 12 months. One of the main barriers is a lack of data-readiness at the enterprise level, which increases the complexity and cost of implementation.
23. The ROI for genAI is largely unproven, even for its producers/vendors, many of whom are venture capital-backed and operate at negative cash flow.
24. The reliability, safety, and viability of genAI have been questioned in dozens of rigorous studies, both academic and industry (including those by producers/vendors).
25. Productivity impacts so far have been modest: drafting code, generating low-stakes content, managing the customer experience. A recent Goldman Sachs research note cited a forecast by Nobel laureate Daron Acemoglu of MIT that the total factor productivity impact of AI over the next decade would be less than 1%, far below what producers/vendors claim.
26. A great deal of IP-related litigation is under way, brought by the content creators/publishers (of text, graphics, and music) whose work provides the data that train and populate AI models. IP rights are central to our 'knowledge economy' and are enshrined in the US Constitution, and experts say it could take as long as a decade to resolve all this.
27. AI regulation is spotty and uncoordinated globally: Europe is the most aggressive, with the EU AI Act; the US has no comprehensive federal regulation, though some states (like California) have legislation pending; China has its own state-controlled approach.
28. Many AI producers/vendors have reduced funding for their internal trust and safety initiatives.
29. User organizations are essentially left to safeguard themselves. However, few organizations have a comprehensive code of digital conduct in place, though many say they are planning to develop one.
30. Few organizations have executive- or board-level expertise in genAI and other advanced technologies (Deloitte: "Nearly 80% of respondents say their boards have limited to no knowledge or experience with AI").
31. Only 14% of organizations report having AI as a standing agenda item for board meetings (Deloitte).
32. As a general rule, 'liability follows agency': the user of an AI tool bears the primary legal liability should anything go wrong, though this has only begun to be tested in the courts.
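Though I promised to steer clear of the technical weeds, item 16 is easy enough to demonstrate that a brief exception seems worthwhile. Below is a minimal Python sketch using OpenAI's published SDK. The model name, the prompt, and the temperature setting are my own illustrative choices, not drawn from any study cited above; the point is simply that a sampling-based model, asked the same question several times, will typically answer differently each time.

```python
# Minimal sketch (not production code): ask a genAI model the same
# question repeatedly and compare the answers. With a nonzero sampling
# temperature, responses typically vary from run to run (item 16 above).
# Assumes the `openai` package is installed and an API key is set in the
# OPENAI_API_KEY environment variable; the model name is an illustrative
# choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "In one sentence, what caused the 2008 financial crisis?"

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # nonzero temperature: output is sampled, so it varies
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

Setting the temperature to 0 makes the output more repeatable, but it does not guarantee identical answers across runs or model updates, and it does nothing to address the fabrication problem in item 17.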
Lest I sound too pessimistic, these 'moments of truth' have not dimmed my belief in the bright promise of a technology-informed society. I continue to wish for the promises of a technology-empowered democracy: omniscience, fairness, and consistency are its pillars, and all are desirable for a sustainable society.
STEM runs deep in both sides of my family, like DNA. I trained (and worked) first as a research scientist and later in management school under the rubric of scientific or 'evidence-based' management. I've always been a technology proponent, early adopter, and professional user. I worked in predictive analytics at KPMG when the field was just getting started in the early 1980s. A decade later, I was writing published articles and books on the competitive advantages of technology, before the internet was available for commercial use. I've developed frameworks and tools for applying basic principles of management and economics to data, information, knowledge, and intelligence, especially their growing role in our present and future economy. And I continue to enjoy, personally, the products and financial performance of several technology companies.
Given my track record of tech optimism, my instinctive skepticism toward genAI technology surprised me. It was my history as a content creator in both business strategy and music that initially tripped my alarms; genAI’s casual treatment of intellectual property (IP) rights goes against my grain, both personally and professionally. And having worked in the analytics ‘sausage factory’ myself, I tend to ask challenging questions.
But I resolved to keep an open mind and to study it intensively. I even went back to Yale to take advantage of a semester-long workshop under Prof. Luciano Floridi, the brilliant founder of the ‘philosophy of information’ and a pioneering thinker on digital ethics.
The more I read, the more I discussed, and the more I engaged with new people, the more I suspected that genAI, as bedazzlingly clever as it may appear, might be a red herring: a wrong turn down a rabbit hole, away from more useful lines of inquiry, and built on shaky foundations, both intellectual and ethical. And because it was amusing and newsworthy, going viral globally in record time and attracting massive investments of money and attention, I sensed that people were rushing into it uncritically, without knowing why (other than a raw fear of missing out). My 'empirical angel' insisted that I share my ongoing misgivings publicly.
genAI does some things well (like generating ‘humanesque’ content based on user prompts) and some things poorly (like reasoning and giving correct answers to fact-based problems). It’s critical to maintain a clear, balanced, curious, and fact-based view of both its capabilities and its limitations — and to keep this perspective updated as new developments emerge.
The preceding does not constitute my professional advice for your organization. For that, I'd welcome a confidential conversation around risk, responsibility, ROI, and codes of conduct governing genAI and other digital technologies.
These are my own observations, based on non-confidential information. They do not necessarily reflect the views of The Conference Board, which I am proud to serve as a Senior Fellow. I've numbered them so that, should you wish to respond, you'll be able to reference them. The photo is not intended to depict the article; it is offered more in gratitude for the Hudson River, whose east bank I walk each day and which for me is a continual source of strength and inspiration. Though, on second thought, the guardrail symbolizes the safety and trust provided to all who enjoy the river.