For all its merits, AI has a darker side: C3.ai’s Thomas Siebel
In an ever-evolving technology landscape, 70-year-old Thomas Siebel remains an unyielding optimist about the disruptive potential of artificial intelligence (AI). He is unfazed by C3.ai's share price plummeting from a high of $177 in 2020 to a current low of $31, well beneath its IPO price of $42. For him, such wild market swings are merely short-term distractions in a sector notorious for its unpredictability. Siebel, who founded C3.ai in 2009 after selling his previous enterprise, Siebel Systems, to Oracle for $6 billion, argues that market valuations are not the ultimate arbiter of long-term success. His conviction is rooted in the transformative power of AI, which he believes will reshape the business world much as the internet did years ago. Amid a growing focus on generative AI, C3.ai has bagged lucrative US federal government contracts from the likes of the US Department of Defence, the US Air Force, the US Marine Corps, and the Defence Counterintelligence and Security Agency. In a free-wheeling interaction with Fortune India, Siebel is confident that the $3.7-billion market-cap company will not be a marginal contender, as he believes AI will not be a winner-takes-all affair.
Is AI in a hype cycle, one designed to attract funding and sell products and services?
Many people argued that the internet was overhyped during the dot-com bubble (1995 to 2000), with stocks such as Amazon trading at excessive valuations. While a market correction did occur—which I believe was appropriate, given that valuations were largely inconsistent with the real value of these companies—the reality is that the internet today is thousands of times larger than anyone predicted it would be in 2000. Currently, there's a lot of hype around the valuations of companies involved in AI and cloud computing. Will these valuations correct themselves? Unless you're a day trader, it might not matter to you. Personally, I'm not a day trader, so I don't care! But I think this wave is a big one. I study the history of science extensively. In my opinion, the impact of generative AI could be as transformative as the invention of the movable-type printing press or the advent of the steam engine, both of which revolutionised their respective eras. The information technology business in the '80s was a $200 billion business worldwide. Since then, technology has changed everything about the way we communicate, the way we entertain ourselves, the way we work, the way we recreate, the way we perform medicine, the way we manufacture products. India is putting a spacecraft on the moon today! In short, IT is now a $9 trillion business. So, it's difficult to overestimate how big AI will be, because it changes everything.
Wasn’t IBM the very first one to walk down the AI path with Watson?
IBM Watson was a fraud – plain and simple. It never worked. It never did AI. It went to town with the game it played, Jeopardy. All it did was memorise Wikipedia. They just sold a bill of goods that didn't work. So, I don't think that's really a reflection on AI, but more a reflection of a company that perpetrated a fraud on the world. And they knew what they were doing.
So, is it “different” this time around?
It changes the way we engage in warfare and healthcare, the way we manage society, and the way we communicate with one another. By the way, it's not all good. A lot of the things that come out of this are terrifying – for instance, totalitarian states that want to control their populations. As far as I'm concerned, every state is a totalitarian state that wants to control its population. That's the way governments are. Today we already see things such as social compliance scoring in China. We're going to see that all over the world, even in the United States. This is scary. You can forget about privacy, buddy. Privacy is over. It is going to be impossible to distinguish between fake news and real news. I'm not a big believer in artificial general intelligence, and I don't think it will be anytime soon that we have computers that are more intelligent than human beings. But it will be impossible to distinguish between a human being and a computer. For instance, when I am talking to you, I wouldn't know whether it's a virtual you or a real you. We are past the Turing test [a measure of a computer's ability to demonstrate human-like intelligence]. We're beyond that.
What in your opinion will be the “good” that will come out of AI?
A lot of the outcomes are highly promising. For example, we'll be able to deliver safer, cleaner, and more reliable energy with a lower environmental impact. We'll provide healthcare at a reduced cost to underserved populations and offer genome-specific treatments. However, there's a darker side to consider. AI will also play a central role in military "kill chains," making decisions without human intervention. While we're using AI to improve healthcare in countries like India, China, the UK, and the US, those very same systems could be used to ration healthcare. And it's debatable which is worse: a single care provider system or a quasi-private enterprise system like we have in the US. In either case, economic decisions will determine whether you or I can undergo surgery, or we might simply be placed in a queue for an extended period, as is sometimes the case in the UK. This isn't an outright denial of service, but putting someone in a line that lasts "forever" achieves the same result. So, I don’t think AI is mere hype; it's real, it’s legit. The underlying technologies are here to stay, and they will bring both significant benefits and a lot of evil.
Will the flaws outweigh the advantages that AI brings to the table for corporations or the government?
It depends on how government entities and society at large deal with it. In free enterprise economies such as the U.S. and India, we'd like to believe that corporations will self-regulate, but often they don't—just look at Facebook for evidence. If we consider the other transformative technologies you and I have discussed, the Gutenberg press serves as a prime example. While it was revolutionary for publishing books in multiple languages and democratising access to religious texts, it also had its downsides. Figures like Calvin and Martin Luther began distributing Bibles, which led to a tumultuous period in Europe marked by religious wars for more than 100 years. The upheaval only subsided with the Treaty of Westphalia in the 17th century. Similarly, the steam engine fuelled the Industrial Revolution, bringing about longer lifespans, better health, and economic growth. However, it also gave rise to "Dickensian" Europe, labour laws, and Marxism. One could even argue that the steam engine indirectly contributed to World War I and World War II. So, both the Gutenberg press and the steam engine had positive and negative impacts. Will the positives outweigh the negatives? I'd like to think so, being an optimist. But we must also consider the potential for things to go awry. The dystopian world depicted in George Orwell's "1984" isn't too hard to imagine. You can see a future not too far away that makes the Orwellian 1984 look like the Garden of Eden! Things could go bad.
Do you see a single poster boy of AI emerging, or will it be as democratised as IT or the internet?
Every time we've seen a significant technological breakthrough in the information technology sector, the expected winner didn't necessarily come out on top. Who was the frontrunner in the PC market? IBM! Yet, does IBM even manufacture PCs today? The biggest computer maker of that era, Digital Equipment Corporation, went out of business largely because they chose to focus on outdated technology like the eight-inch floppy disk. And how did IBM miss the cloud? A company that started by selling books—Amazon—has become the largest cloud service provider through AWS. They expanded from books to T-shirts and socks, and now they dominate the cloud market! And consider Facebook, which originated in a Harvard dorm room. Given all these unexpected outcomes, it's hard to predict where the AI leaders will emerge from. One might assume Google or Facebook would take the lead, but these giants often get trapped in their own business models, a situation often referred to as the "innovator's dilemma," and risk becoming irrelevant. So, I suspect we have no idea who the winners are going to be.
What about Microsoft? It's betting big on OpenAI?
The guys at Microsoft are very smart. I think they're incredibly impressive. Their execution has been flawless. They're an impressive company that has reinvented itself and it's fun to watch. So, I would not count them out.
What parallels and contrasts do you see between the early days of CRM and the current state of enterprise AI?
The obvious winners in CRM would have been Accenture, Oracle, and SAP, yet none of them played much of a role at all. Siebel Systems came out of nowhere in CRM and had an 85% market share worldwide. The company was sold to Oracle. But Oracle dropped the ball, and Salesforce came out of nowhere. When we sold Siebel to Oracle in 2006, we were doing about a billion and a half in revenue and Salesforce was doing about $150 million. They didn't even count. Today, they're doing $60 billion a year and probably have an 80% market share. How is that possible? Siebel Systems was a nobody. But we took on SAP, Oracle, and Accenture and had an 85% market share. So, I don't think we know who the winner is going to be, and it surely won't be the obvious candidate.
If it's a free-for-all, then how do you play the game, especially in enterprise AI?
Well, we started investing in this business 15 years ago, and we believed that there would be a large market for enterprise AI. Back then the internet was big, but the cloud wasn't. Big data was just an idea in people's minds. The Internet of Things didn't exist. So, we've done about 15 years of work, maybe a couple of billion dollars' worth of research and development, building a software stack that allows companies to build large enterprise AI applications. We've built 42 turnkey AI applications for oil and gas, utilities, banking, defence and intelligence, telecommunications, and aerospace. As we get into 2023, everybody believes that this AI market is bigger than a bread box. It is bigger than a bread box. It is generally believed that this is the largest market in the history of enterprise application software. So, C3.ai is well positioned. We're a likely candidate to be the leader in this space, but it all boils down to execution. We do not yet know what the distribution channels for AI will be. We do not yet know the pricing models. So, will we be at the cusp of innovation, or will we be made irrelevant by somebody who innovates us out of business? It all boils down to execution.
You had a sensational listing with C3.ai, but the stock has been on a roller-coaster since?
Going public [Dec 2020] was one of the smartest things we've ever done. We raised $2 billion, and even today I have a cash balance of $800 million. We raised the money we needed. We priced the stock at $42 and it ran up to $185. What the hell was it doing at $185? It was beyond me! But my job is to run a company, not to play the equity market. In my opinion, the market got a little carried away and since then, it has corrected, rightly so. We are a going concern. We can serve our customers. We can invest in our technology. It was the smartest thing we ever did. We're in a position now to grow our business in Asia, in Europe, in North and South America, in manufacturing and aerospace. We have the capital that we need to establish a leadership position in the marketplace. While the likes of Salesforce, Amazon, Google, Facebook, and Microsoft are laying people off, we're hiring in Europe, Asia, and North America. We're growing the business.
Coming to predictive analytics, how will AI enable accurate sales forecasting and lead prioritisation for companies?
CRM, traditionally, is a system of record where we store all the information about customers and products. Most of the information in those systems is not very good because it's been entered by salespeople who have put in whatever information they have to get the sales manager off their backs. Hence, forecast error in sales in CRM systems is massive. Consider Boeing, which likely uses a CRM system from a provider like Salesforce. To maximize the potential of this CRM system through AI, one can integrate various data points that are crucial to the aerospace industry. This includes factors such as fuel prices, COVID trends, unemployment rates, GDP growth, and travel policies. Additionally, data about Boeing's key clients—airlines like Southwest, Emirates, and Lufthansa—can be incorporated. Stock prices, equity figures, social media sentiment, analyst reports, and annual reports all add layers of richness to the data set. By synthesising all of this information into a unified, federated data image that integrates with the existing CRM system, machine learning models can be developed that offer highly accurate predictions. These models can inform Boeing with a high level of precision who is likely to make a purchase in the next three, six, or nine months. They can also provide insights into the next best offer or product to propose. Although companies like Salesforce propose to enhance their CRM systems with AI capabilities, the reality is that the data within these systems alone are often insufficient and of poor quality for making accurate predictions. What's needed is an amalgamation of internal CRM data with external market data. When combined, these rich data sets can be used to develop robust machine learning models for sales forecasting and lead prioritisation. This approach to combining AI with CRM has the potential to revolutionise industries like commercial aviation and fundamentally change the nature of CRM itself.
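To make the approach he describes concrete, here is a minimal sketch, using generic open-source tooling (pandas and scikit-learn) with synthetic, purely illustrative data rather than C3.ai's platform or any real Boeing figures, of how internal CRM records might be joined with external market signals and fed to a purchase-likelihood model:

```python
# Minimal sketch: unify internal CRM records with external market signals,
# then train a simple purchase-likelihood model. All data is synthetic and
# purely illustrative; column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500  # synthetic account-month records

# "Internal" CRM data: what a sales team typically enters about opportunities.
crm = pd.DataFrame({
    "account_id": rng.integers(1, 50, n),
    "month": rng.integers(1, 13, n),
    "deal_size_musd": rng.uniform(1, 300, n),
    "pipeline_stage": rng.integers(1, 5, n),
})

# "External" market data keyed the same way: fuel prices, GDP growth, sentiment.
external = pd.DataFrame({
    "account_id": crm["account_id"],
    "month": crm["month"],
    "fuel_price": rng.uniform(60, 120, n),
    "gdp_growth": rng.normal(2.0, 1.0, n),
    "news_sentiment": rng.uniform(-1, 1, n),
}).drop_duplicates(["account_id", "month"])

# Unified, federated view: internal CRM joined with external signals.
data = crm.merge(external, on=["account_id", "month"], how="left")
data["purchased"] = (rng.uniform(0, 1, len(data)) < 0.3).astype(int)  # stand-in label

features = ["deal_size_musd", "pipeline_stage", "fuel_price",
            "gdp_growth", "news_sentiment"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features].fillna(0), data["purchased"], test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice the external feature set, the label definition, and the model would be far richer; the point is simply that the predictive signal comes from the unified data image, not from the CRM records alone.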
Can AI help companies achieve deeper personalisation with their customers without increasing operational complexity?
The advent of generative AI is poised to dramatically simplify the often-complex landscape of enterprise information systems. Let's go back in time: We started building these systems—manufacturing, ERP, supply chain, HR, CRM—in the early '80s. Fast-forward nearly half a century, and the user interfaces for these systems are, quite frankly, dreadful. Take SAP or Siebel version 23 as examples; they're virtually unusable, laden with confusing tabs, dropdown menus, and arcane control sequences that only specialists can navigate. I believe the "killer application" for generative AI lies in enterprise search. Imagine integrating the data corpus from various departments—ERP, HR, CRM, manufacturing, supply chain, customer service, vendor management—into a single, unified training set for a specialised language model. This wouldn't necessarily need to be a massive language model; even a small one could do the job. Currently, when an executive wants an overview of business operations, they must query multiple systems and interact with several dashboards. That's cumbersome and inefficient. But with generative AI, the future could be as simple as a single search bar interface, akin to a Google or Mosaic browser. You pose a question like, "What is my most profitable product line?" or "How am I doing against my diversity goals in a particular city?" and receive an immediate answer. This AI-powered search would sift through all relevant enterprise systems to provide a comprehensive response. So, rather than complicating the user experience, AI has the potential to vastly simplify it, making these enterprise systems more accessible and easier to manage.
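As a toy illustration of that "single search bar" idea, the sketch below unifies a few hypothetical snippets from different enterprise systems and retrieves the ones relevant to a plain-language question. TF-IDF similarity stands in for whatever retrieval a real system would use, and generate_answer() is only a hypothetical placeholder for the language-model step:

```python
# Toy enterprise-search sketch: retrieve the snippets relevant to a question
# from a unified corpus spanning several (hypothetical) enterprise systems.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "ERP": "Product line Alpha gross margin 41%, Beta 18%, Gamma 27% last quarter.",
    "CRM": "Churn risk flagged for three key accounts in the EMEA region.",
    "HR": "Engineering headcount in Bengaluru is at 62% of the diversity hiring target.",
}

def search(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most relevant to the question across all systems."""
    docs = list(corpus.values())
    vectorizer = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(docs)
    )[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

question = "What is my most profitable product line?"
context = search(question)
print(context)
# A real system would now pass the retrieved context to a (possibly small)
# language model, e.g. answer = generate_answer(context, question)  # hypothetical
```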
What will be the largest constraint?
The most significant obstacle to people fully embracing these advanced systems is likely to be change management. People are generally wary of AI, and for good reasons. Many AI systems are "black boxes" that provide results without clear explanations, fuelling concerns about transparency. Additionally, there's the fear that AI could threaten jobs and disrupt traditional workflows. People are often resistant to change, especially when they've been doing their jobs a certain way for a long time. This resistance isn't confined to any single industry; it's a universal issue whether we're talking about healthcare, aircraft maintenance, or managing power grid infrastructure in places like India. Overcoming this resistance to encourage the adoption of these systems will be a significant challenge.
How real is the threat of job losses?
When we examine past technological revolutions, it's clear they led to job displacement. The invention of movable type made manuscript-writing monks redundant, and the advent of production lines eliminated the need for teamsters. However, it's essential to note that these technologies also created new jobs on a much larger scale. For every monk who lost a job transcribing Bibles, thousands of jobs were created in document preparation, eventually on platforms like Microsoft Word. Similarly, while teamsters may have seen a decline, thousands of truck-driving jobs emerged. Will AI replace some jobs? Absolutely. But the concern shouldn't be solely about job loss. The real issue is that those who fail to adapt to AI will find themselves at a disadvantage. Their jobs are more likely to be taken not by machines, but by other individuals who are leveraging AI to be more effective at their work. In this sense, AI is a net job creator, rather than a job destroyer.
Coming back to AI enterprise solutions, how do you tie down the value proposition to pricing?
The viability of an AI system ultimately depends on its ability to add value; if it doesn't, it simply won't be used. Today, the preferred purchasing model for software is consumption-based usage (CBU), similar to how cloud computing services are billed. This usage-based approach aligns with contemporary buying preferences and allows companies to tie costs directly to value generated. So, how do we quantify that value? The metrics are relatively straightforward. If an AI system improves production efficiency, reduces inventory costs, and minimises downtime—all while delivering higher quality products at lower prices—then its value becomes measurable. We can assess the increase in production and calculate the cost savings, which translates to direct value for shareholders.
So, there's no hard-selling of RoI for an AI application?
Not necessarily. Take, for example, Shell, which uses C3.ai applications. They are seeing an economic benefit of $2 billion this year alone from various applications they've deployed. Similarly, the U.S. Air Force is using predictive maintenance applications, generating billions in savings each year—a figure that they can directly measure. We also have clients focused on production efficiency, achieving measurable increases in output, and therefore, economic benefit. Another application is in CRM systems where C3.ai helps to reduce customer churn. The system identifies customers who are likely to discontinue their business, allowing for timely interventions to retain them. So the economic benefits are not only measurable but also quantifiable. The success of each of our projects is gauged by the additional economic value that the customer gains each year from using our product.
Given that AI by its very nature is self-learning and evolutionary, will C3.ai transition from being an 'AI solution provider' to a 'facilitator' that merely oversees AI systems that design and innovate on their own?
At present, we are utilising well-understood techniques such as self-learning algorithms and reinforcement learning across sectors including banking, manufacturing, aerospace, telecommunications, and healthcare. These techniques offer tangible, lasting benefits to shareholders. So that's real – it's not ephemeral. The potential for growth is immense—likely two to three orders of magnitude larger than what we anticipate for the next decade. So, the upside is clear, and I don't think there's much concern about the technology becoming obsolete or irrelevant. However, the downside is equally apparent. Governments, such as the European Union, are already attempting to regulate AI, but the issue is complex. The EU's proposed regulations are a case in point: a 58-page document that's nearly incomprehensible even to those proficient in the field. It's evident that not even the people who wrote the document have fully comprehended its complexities. This vagueness will likely lead to years or decades of litigation and keep a lot of bureaucrats busy. So, while regulation is needed, finding the right approach is a complex problem. If we don't navigate this carefully, we're setting ourselves up for a lot of future discontent.
In five years or a decade from now, where will C3.ai be?
I'm confident that in five years, C3.ai will be the world's leading provider of enterprise AI applications. This will be true not just in one region, but across Asia, Europe, North and South America, and in every industry imaginable.