Any organisation accessing AI models hosted in the cloud knows the challenge of ensuring that the large volumes of data needed to build and train those workloads can be accessed and ingested quickly enough to avoid performance lags.
Latency tends to be exacerbated when datasets distributed across multiple geographical sources must be collected and processed over the network. It is particularly problematic when deploying and scaling real-time AI applications in smart cities, TV translation, and autonomous vehicles. Taking those workloads out of a centralised data centre and hosting them at the network edge, closer to where the data actually resides, is one way around the problem.
The ML endpoints also feature built-in distributed denial of service (DDoS) protection to help thwart cyber attacks and keep applications running in the event of an incident. That's a crucial layer of cyber defence which aids compliance with various data protection rules and regulations, including the GDPR, PCI DSS and ISO/IEC 27001, says the company.