Fresh from Google Cloud Next 2024, Randy Cairns, our SVP of Marketing, brings back insights from the forefront of AI innovation. Dive into Randy’s analysis as he unveils the emerging trends shaping the future of enterprise AI.
I spent last week in Las Vegas at the Google Cloud Next show. It was an impressive event, and Google Cloud clearly has mojo and momentum. Of course, it was all AI all the time, which fascinates me; I see many of the same themes in our business at Hyperscience. Following are a few of my observations from the conference.
1) Google Cloud has a compelling vision and comprehensive offering for AI. The combination of their compute infrastructure, Gemini, Vertex AI, and the six newly introduced AI agents positions Google very well to deliver a wide range of AI use cases.
2) The market as a whole seems to be in experimentation and investigation mode. Deploying and building AI in the enterprise is a challenging undertaking, and there will be many science projects as organizations figure out how best to harness the power of AI. Clearly, AI promises to be transformative, but challenges remain around data quality, traceability, and accurate outcomes that deliver real ROI.
3) The humble “document” is back in vogue. Successful AI applications will depend on access to quality, relevant data, and documents are the information assets that flow through an organization; they represent the DNA of employees, customers, partners, and the market. Organizations require automated systems that can read, understand, and act on these assets. This is where Hyperscience comes in: we are uniquely positioned to deliver this understanding, and to deliver AI applications that are grounded in enterprise truth and based on the language of the customers’ business.
Generative AI also has the potential to disrupt the entire database/warehouse, business intelligence, and enterprise business systems ecosystem. Equipped with the critical business context and data contained in your organization’s documents, AI applications and LLMs will be able to deliver against a more trustworthy, real-time understanding of your business, market, and customers. This raises questions about the role of legacy systems of record and ETL tools, which may not have accurate, complete, or current data to feed the AI machine. Why go to a database when you can go directly to the document your customer just sent you?
4) Everyone is riffing on RAG. Retrieval-augmented generation (RAG) and vector databases are becoming the standard way for organizations to bring the power of AI to their enterprise data, and there will be a ton of investment in these technologies over the next few years. That said, their success will depend on access to pristine, accurate enterprise data: bad data, incorrectly labeled or understood, will lead to exponentially bad AI outcomes. A minimal sketch of the retrieve-then-augment pattern follows these observations.
5) You don’t need a helicopter to cross the street. In most situations, enterprises will not need a massive frontier model to run the use cases they are building. Many organizations will build their own sovereign models to power their AI strategy. We like the sovereign model concept because it speaks to the borders, governance, security, and controls that an enterprise can apply to its AI use cases. And in most cases, a narrow model with a smaller number of parameters is more effective (and much less costly) than a giant model with over a trillion parameters.
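To make the RAG point in observation 4 concrete, here is a minimal, hypothetical sketch of the retrieve-then-augment flow in Python. Everything in it is a placeholder: the documents are invented, the hashing “embedding” stands in for a real embedding model, the in-memory list stands in for a vector database, and the final LLM call is omitted. The point it illustrates is simply that the generated answer can only be as good as the documents that get retrieved.

```python
import hashlib
import math

def toy_embed(text: str, dims: int = 64) -> list[float]:
    """Placeholder embedding: hash each word into a fixed-size vector.
    A real system would call an embedding model instead."""
    vec = [0.0] * dims
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical enterprise documents standing in for a vector database.
documents = [
    "Invoice 4821 from Acme Corp, net 30, total due $12,400.",
    "Customer onboarding form for Jane Doe, account opened March 2024.",
    "Claims correspondence regarding policy 88-210, water damage.",
]
index = [(doc, toy_embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored documents by similarity to the query and return the top k."""
    q = toy_embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

query = "What is the total due on the Acme invoice?"
context = "\n".join(retrieve(query))

# The retrieved context is prepended to the prompt before it goes to an LLM;
# the LLM call itself is omitted here.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

In a real deployment, those stored documents first have to be accurately read, understood, and labeled, which is exactly where the garbage-in, garbage-out risk described above comes from.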
This is an exciting time for everyone in our industry. The pace of change is profound, and the tech landscape as we know it is shifting every day. I’d love to hear your thoughts on what you are seeing in the market, and what gets you excited (or concerned) about where the AI market is headed.