The potential for a new breed of artificial intelligence applications to unleash new value is enormous, but IT organizations face substantial challenges in implementing this novel technology effectively. Factors such as performance, scaling, and cost are frequently cited as pain points, but one additional aspect will ultimately determine the success or failure of any AI initiative: data.
The math is simple: the more of the right sort of data a model is trained on, the better that model will be, and the better the ultimate outcome. To build truly differentiated capabilities, organizations will need to build massive data lakes from their own proprietary data sets. Those data sets will, of course, include their most recent data, but many organizations are also beginning to realize that an immense amount of potential value lies hidden within the vast stores of historical, archived data that already exist in their environments.