F5 and Intel Collaborate to Bring Advanced Application Security and Delivery Capabilities to AI Deployments
F5 has announced a partnership with Intel to bring enhanced application security and delivery capabilities to AI deployments. By combining F5's NGINX Plus with the optimisation and performance tooling of the Intel Distribution of OpenVINO toolkit and Intel infrastructure processing units (IPUs), the joint solution delivers secure, scalable, and high-performance AI inference.
As AI adoption grows across industries, the need for efficient and secure AI inference is more critical than ever. The OpenVINO toolkit accelerates AI model inference, while F5 NGINX Plus provides robust traffic management and security; integrated, the two address this need directly.
The OpenVINO toolkit simplifies model optimisation and lets developers build scalable, efficient AI solutions with minimal code changes. NGINX Plus, in turn, strengthens the security and reliability of AI model deployments by providing traffic management, SSL termination, and encryption.
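As a rough illustration of the OpenVINO side of that workflow, the sketch below loads an already-optimised model and runs inference with the OpenVINO Python runtime. The model path, target device, and input shape are placeholder assumptions for illustration, not details published by F5 or Intel.

```python
# Minimal OpenVINO inference sketch (illustrative only).
# Assumes a model already converted to OpenVINO IR format
# ("model.xml"/"model.bin") and an input matching its expected shape.
import numpy as np
import openvino as ov

core = ov.Core()

# Read the optimised model and compile it for a target device
# ("CPU" here; "AUTO" would let OpenVINO choose a device).
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "CPU")

# Dummy input with a placeholder shape (e.g. a 224x224 RGB image batch).
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference and fetch the first output tensor.
result = compiled_model([input_tensor])[compiled_model.output(0)]
print("Output shape:", result.shape)
```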
Intel IPUs boost performance by offloading infrastructure services from the host CPU, freeing up resources for AI model servers. This offload improves scalability and performance for both NGINX Plus and the OpenVINO Model Server (OVMS).
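To make the NGINX Plus and OVMS pairing concrete, here is a hedged sketch of a client sending an inference request through a reverse proxy to an OpenVINO Model Server, which exposes a TensorFlow-Serving-compatible REST API. The proxy hostname, port, and model name are assumptions for illustration only.

```python
# Illustrative client call to an OpenVINO Model Server (OVMS) instance
# fronted by an NGINX Plus reverse proxy. The proxy endpoint and model
# name ("resnet") are placeholder assumptions.
import numpy as np
import requests

PROXY_URL = "https://nginx-proxy.example.com"   # assumed NGINX Plus endpoint
MODEL_NAME = "resnet"                           # assumed model name

# OVMS accepts TensorFlow-Serving-style requests:
# POST /v1/models/<model>:predict with an "instances" payload.
payload = {"instances": np.random.rand(1, 3, 224, 224).tolist()}

response = requests.post(
    f"{PROXY_URL}/v1/models/{MODEL_NAME}:predict",
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["predictions"][0][:5])  # first few scores
```

In this arrangement, NGINX Plus handles SSL termination, load balancing, and traffic policy in front of the model server, while OVMS focuses solely on serving inference.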
This integrated solution is particularly beneficial for edge applications like video analytics and IoT, where low latency and high performance are essential. By running NGINX Plus on Intel IPUs, the solution ensures reliable responses, making it ideal for content delivery networks and distributed microservices deployments.
Kunal Anand, chief technology officer at F5, said of the collaboration: “Teaming up with Intel empowers us to push the boundaries of AI deployment. This collaboration highlights our commitment to driving innovation and delivers a secure, reliable, and scalable AI inference solution that will enable enterprises to securely deliver AI services at speed.”
Pere Monclus, chief technology officer of the Network and Edge Group at Intel, emphasised: “Using the cutting-edge infrastructure acceleration of Intel IPUs and the OpenVINO toolkit alongside F5 NGINX Plus can help enable enterprises to realise innovative AI inference solutions with improved simplicity, security, and performance at scale for multiple vertical markets and workloads.”
The integrated solution is now available. For more information, visit f5.com/intel. A companion blog post from F5 CTO Kunal Anand provides further insight into the offering.