After more than a decade of providing a platform-as-a-service (PaaS) environment for building and deploying AI applications, C3.ai launched an initial public offering (IPO) in December 2020. Earlier this month, in partnership with Microsoft, Shell, and the Baker Hughes unit of General Electric, the company launched the Open AI Energy Initiative to enable organizations in the energy sector to more easily share and reuse AI models.
Edward Abbo, president and CTO of C3.ai, explained to VentureBeat why more fragmented alternatives to building AI applications that rely on manual processes not only take too long but also are, from an enterprise support perspective, unsustainable.
This interview has been edited for brevity and clarity.
VentureBeat: Where does C3.ai fit in the ecosystem of all things AI?
Edward Abbo: There are two key products that we bring to market. One is an application platform as a service that accelerates the development, deployment, and operation of AI applications. Our customers can design, develop, deploy, and operate AI apps at scale. It runs on Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform as well as on private clouds and in a customer’s datacenter. The other is a suite or a family of industry-specific AI applications. Manufacturing customers, for example, can subscribe to AI applications for customer engagement.
VentureBeat: C3.ai just launched an Open AI Energy Initiative alliance with Shell, Baker Hughes, and Microsoft. What’s the goal?
Abbo: The idea is that companies can develop their own AI models and applications and make them available via the initiative in a way that allows other companies to subscribe to them. This is the first marketplace for AI applications and models in that industry.
VentureBeat: Do you think organizations are struggling to operationalize AI?
Abbo: You often hear two things. The first is that data scientists spend 95% of their time grappling with data. They need to access data from numerous different data stores and then unify that data. An entity, whether a person or a piece of equipment, might have a different identifier in each system. Almost all corporations are plagued with far too many systems, so their data is fragmented. Data scientists end up having to do that work themselves: unifying the data and normalizing it on a common time basis. They spend 95% of their time on data and data operations and only 5% on machine learning. That's obviously a huge inefficiency, and a great frustration for many data scientists.
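The unification work Abbo describes can be sketched in a few lines. This is an illustrative example, not C3.ai's product: the record fields, the identifiers, and the crosswalk table are all hypothetical, but they show the two chores he names, resolving different identifiers to one entity and aligning readings on a common time axis.

```python
from datetime import datetime

# Hypothetical readings for the same pump, pulled from two systems
# that each use their own identifier (an entity-resolution problem).
erp_records = [
    {"asset_id": "PUMP-0042", "ts": "2021-03-01T10:17:00", "pressure": 87.2},
]
scada_records = [
    {"tag": "ST-7731", "ts": "2021-03-01T10:43:00", "pressure": 86.9},
]

# A crosswalk table maps each system-specific identifier to one
# canonical entity name.
crosswalk = {"PUMP-0042": "pump_42", "ST-7731": "pump_42"}

def normalize(record, id_field):
    """Unify identifiers and snap timestamps to the hour so readings
    from different systems line up on a common time axis."""
    ts = datetime.fromisoformat(record["ts"])
    return {
        "entity": crosswalk[record[id_field]],
        "hour": ts.replace(minute=0, second=0).isoformat(),
        "pressure": record["pressure"],
    }

unified = [normalize(r, "asset_id") for r in erp_records] + \
          [normalize(r, "tag") for r in scada_records]
# Both readings now share entity "pump_42" and hour "2021-03-01T10:00:00".
```

Multiply this by hundreds of systems, each with its own schema and identifier scheme, and the 95% figure becomes easy to believe.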
The second is that data scientists employ programming languages such as Python and R, but they're not computer scientists or programmers. They hand a model they think has high value over to an IT organization that isn't used to dealing with it, and that organization then has to figure out how to operationalize and scale it. You can have two million machine learning models that need to be trained, validated, put into operation, and then monitored for efficacy. After that, you might need to retrain a model or introduce another version into operation.
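The monitor-and-retrain loop Abbo outlines can be reduced to a simple policy. The sketch below is hypothetical (the model names, accuracy figures, and threshold are invented, and this is not C3.ai's API); it only illustrates the operational step of flagging deployed models whose monitored efficacy has drifted.

```python
def models_needing_retraining(live_accuracy, threshold=0.8):
    """Return the IDs of deployed models whose monitored accuracy
    has drifted below an acceptable threshold."""
    return sorted(
        model_id for model_id, acc in live_accuracy.items() if acc < threshold
    )

# Monitored efficacy for three hypothetical deployed models.
live_accuracy = {"churn_v3": 0.91, "failure_v7": 0.74, "demand_v2": 0.86}

stale = models_needing_retraining(live_accuracy)
# → ["failure_v7"]: only the drifted model is queued for a new version.
```

At the scale Abbo mentions, millions of models, even this trivial check implies infrastructure for collecting live accuracy, versioning models, and rolling new versions into production, which is exactly the work IT organizations are rarely set up to do.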
VentureBeat: How does C3.ai change that equation?
Abbo: We’ve flipped it by handling the data operations. The data scientists can now spend 95% of their time on machine learning and only 5% retrieving data. We’re able to remove the barrier of going from endless prototypes to actually scaling and putting AI models in production. These are the hurdles we remove to scale and achieve enterprise AI.
We provide a product called Data Studio to integrate and rapidly unify data from disparate sources. Because the platform serves up data and analytic services, the data scientist doesn't have to worry about doing all that work. We provide business analysts with drag-and-drop canvases they can use to bring data in and experiment with machine learning models without programming. They can then publish AI models and data services to downstream applications that might invoke those services.
VentureBeat: We hear a lot about machine learning operations (MLOps) and data operations (DataOps). Will these two disciplines need to converge?
Abbo: MLOps and DataOps need to converge. We’ve really brought data operations, IT operations, machine learning operations, business analysts, and applications onto a single platform. Data engineers are focused on aggregating the data and serving it up. Data scientists then use that to create models and publish them. Business analysts can then plug into the machine learning model library using the tools of their choice.
VentureBeat: That’s basically a no-code tool. Does that mean you don’t need to be a rocket scientist to do AI?
Abbo: We accommodate both universes. If you're a programmer, you can access our microservices from the programming languages you already use. But if you're a business analyst or citizen data scientist, you don't need to program. You can simply drag, drop, and connect, referencing sophisticated algorithms through a user interface without writing code. We use a technique referred to as model-driven architecture: we represent the semantics of the application in a way that's independent of the underlying technology. As Microsoft, AWS, or Google introduce new technologies, we can plug those into the application, keeping it future-proof.
VentureBeat: Do you think that AI platforms will by definition need to be hybrid in the sense of providing a level of abstraction that can be used to manipulate data regardless of where it resides?
Abbo: I definitely agree. Companies still have the majority of their systems in their datacenters. Being able to write your applications in a way where they can initially be deployed on-premises and then, without having to rewrite them, be moved into a cloud is a huge value to customers.
VentureBeat: What AI mistakes do you see organizations routinely making?
Abbo: The first inclination of the CIO is, "How hard could this be? I'll just unleash my programmers to develop this capability." Then, 12 to 18 months down the road, they figure out it's enormously difficult to pull off because of all the components you need to orchestrate. Data unification from dozens, sometimes hundreds, of different systems is a really challenging problem.
It's not just a relational database anymore; it's a multiplicity of data stores. Then you need an event model that handles data in batch, micro-batch, streaming, in-memory, or interactive modes. Then there is a plethora of tools that need to interoperate. Underneath that, you have data encryption, data transposition, and data persistence. You have to orchestrate all of that.
The sooner people figure out they need a cohesive platform to accelerate the development and deployment of these AI apps, the better. We’re not talking about one or two apps here. We’re talking about hundreds of AI apps that leverage the existing systems in a way that delivers enormous economic value to companies. CEOs want them deployed as soon as possible.