By Antony Adshead, Storage Editor
Published: 06 Dec 2024
In this podcast, we look at storage and artificial intelligence (AI) with Jason Hardy, chief technology officer for AI at Hitachi Vantara.
He talks about the performance demands that AI processing places on storage, but also highlights the extreme context switching it can cause as businesses are forced to pivot between training and inferencing workloads in AI.
Hardy also discusses a future that potentially includes agentic AI (AI that creates its own workflows and makes decisions for itself), which will likely lead to an even greater increase in workload context switching.
Antony Adshead: What demands do AI workloads place on data storage?
Jason Hardy: It's a two-dimensional problem. Obviously, there is the fact that AI needs speed, speed, speed, speed and more speed. Having that level of processing, especially when talking about building LLMs [large language models] and doing foundational model training, it [AI] requires extremely high-performance capabilities.
That is still the case and will always be true, especially as we start doing a lot of this at volume, as we start to trend into inferencing, and RAG [retrieval-augmented generation], and all of these other paradigms that are starting to be introduced to it. The other demand, which I don't want to say is overlooked, but is under-emphasised, is the data management side of it.
How do I know what data I need to bring into my AI outcome without understanding what data I actually have? And one might say, that's what the data lake is for, but really, the data lake is just a big dumping ground in a lot of cases.
Yes, we need extremely high performance, but we also need to know what data we have. I need to know what data is applicable to the use case I'm targeting, and then how I can properly use it, even from a compliance standpoint, or a regulatory requirement, or anything like that along those lines.
It's really this two-headed dragon, almost, of needing to be extremely performant, but also knowing exactly what data I have out there, and then having proper data management practices and tools and so on wrapped around all of that.
And a lot of that concern, especially as we look at the unstructured data side, is very critical and embedded into some of these technologies like object storage, where you have metadata capabilities and the like that give you a bit more of that descriptive layer.
When it comes to traditional NAS, that's much more of a challenge, but it's also much more of where the data is coming from. It's, again, this double-sided thing of, "I need to be extremely fast,