Embedded AI: the inside story
Most new technologies, when they first emerge, are treated as curiosities. It’s the nature of the beast. People are intrigued by technology in general and so, when a new technology comes out, fascination abounds.
But there’s a problem with that fascination, and an issue with putting a new technology on a pedestal. Doing so encourages practitioners and customers alike to romanticize the science of it. Every cool capability, every technical nuance becomes part of the technology’s mystique. That romanticization is harmless at the beginning, but it inhibits the maturation process as interest in the technology picks up.
Heroes, not martyrs
In the starting-out phase the mystique is to be expected: oftentimes, the only people who can implement a new technology are those comfortable rolling up their sleeves and working with it at the lowest level, on the command line, controlling it with code and, in general, embracing all its messiness. We need trailblazers at the beginning. But the danger, and the typical outcome, of these maiden-voyage heroics is that practitioners, and even customers, derive a belief system from them. The pioneers cultivate a kind of dogma, premised on the idea that only by working with the technology at the bare-bones level can a credible, robust implementation be created.
The early adopters’ habits thus harden into a kind of chauvinism, creating an ironic status quo: a group of technologists and tech enthusiasts who live on the bleeding edge become, essentially, Luddites. Although they help establish a discipline, they end up impeding its progress. When the inefficiencies of working with a product in this craftsperson-like way can finally be eliminated, the experts often resist the repeatability and efficiency of working with it in a more automated fashion.
Bespoke can choke
In the world of Artificial Intelligence (AI) this growing pain is now very much in evidence. Skilled practitioners in this area are known as data scientists, and they love their knobs and dials. Being able to work in code, select machine learning algorithms through trial and error, and then tune parameter values for those algorithms slowly and deliberately, is viewed as an essential component in the process. It’s no accident that a few of the leading AI platforms refer to the projects in their development environments as “experiments.”
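To make the “knobs and dials” concrete, here is a schematic sketch of that manual experiment loop. Everything in it is illustrative: `train_and_score` is a stand-in for a real training-and-evaluation run, and the candidate algorithms and learning rates are invented for the example.

```python
# A schematic sketch of the manual "experiment" loop: try one candidate
# configuration at a time, record the score, pick the winner by hand.
# `train_and_score` stands in for a real training run on real data.

def train_and_score(algorithm: str, learning_rate: float) -> float:
    """Toy scoring function; a real run would train a model and
    evaluate it on held-out data."""
    best_rate = {"tree": 0.01, "linear": 0.1}[algorithm]
    return 1.0 - abs(learning_rate - best_rate)  # peaks at the "right" rate

# Deliberate trial and error -- one experiment per combination.
results = []
for algorithm in ("tree", "linear"):
    for lr in (0.001, 0.01, 0.1, 1.0):
        score = train_and_score(algorithm, lr)
        results.append((score, algorithm, lr))

best_score, best_algorithm, best_lr = max(results)
```

An automated search sweeps the same grid without a human in the loop, which is precisely the step the experiment-driven culture tends to resist.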
The problem with this approach is that it doesn’t scale. Technology isn’t cooking, and preparing things in “small batches” is not, generally speaking, an enterprise approach. Even in the culinary world, people who cook at home can follow recipes rather than create them. And people don’t have to cook at all; they needn’t hire a chef, either. Restaurants exist, after all. Prepared meals exist too. Even frozen meals can taste good. Not everything has to be made from scratch.
The AI world’s counterpart to a prepared meal is a product with intelligent technology embedded, where machine learning is used behind the scenes to drive or enhance the product’s functionality. In this way, customers benefit from AI without having to understand it, or even know it’s there. On-staff expertise is no longer required and, certainly, all the rigors of machine learning work, like algorithm selection and “feature engineering,” fall to the software or service company behind the product itself.
That division of labor, where the vendor does the hardest work and the customer consumes the output of that effort, is how technologies become mainstream, have the greatest impact on business, and benefit the most people in an organization. And it is on advances in applied AI that analysts, journalists, customers and, indeed, vendors and product managers need to focus.
Management of data, by data
In the world of data management, the utility of applied AI is at least double that of the average scenario. That may be a strong statement, but consider that data is growing in volume at incredible velocity while its management is being regulated at an ever-growing rate. As the requirements grow (and grow) and the data grows with them, management and governance of that data cannot be done manually. The task is too gargantuan. But the substance of data management involves careful inspection and remediation, so how can it be carried out in any fashion other than a manual one?
In fact, AI is built for exactly such a situation: intelligent, judicious examination and execution, on an automated basis, at scale. Embedded AI is therefore the thing to search for in a data management product, including data governance, data prep and data discovery. Look for it in these products and check to see how “real” it is. Ask your vendor how they use AI and what functionality is driven by it.
The more you know about embedded AI in a product you’re considering, the better a purchasing decision you will make.