Snowflake Teams With NVIDIA To Deliver Full-Stack AI Platform For Customers To Transform Their Industries
SAN JOSE, Calif. – March 18, 2024 – Snowflake (NYSE: SNOW), the Data Cloud Company, today announced at NVIDIA GTC an expanded collaboration with NVIDIA that further empowers enterprise customers with an AI platform, bringing together the full-stack NVIDIA accelerated computing platform with the trusted data foundation and secure AI of Snowflake’s Data Cloud. Together, Snowflake and NVIDIA deliver a secure and formidable combination of infrastructure and compute capabilities designed to unlock and accelerate AI productivity and fuel business transformation across every industry.
“Data is the fuel for AI, making it essential to establishing an effective AI strategy,” said Sridhar Ramaswamy, Snowflake CEO. “Our partnership with NVIDIA is delivering a secure, scalable and easy-to-use platform for trusted enterprise data. And we take the complexity out of AI, empowering users of all types, regardless of their technical expertise, to quickly and easily realise the benefits of AI.”
“Enterprise data is the foundation for custom AI applications that can generate intelligence and reveal new insights,” said Jensen Huang, founder and CEO of NVIDIA. “Bringing NVIDIA accelerated computing and software to Snowflake’s data platform can turbocharge enterprise AI adoption by helping customers build, deploy and manage secure generative AI applications.”
Expanding on Snowflake and NVIDIA’s previously announced NVIDIA NeMo™ integration, Snowflake customers will soon be able to utilise NVIDIA NeMo Retriever directly on their proprietary data in the Data Cloud, all while seamlessly maintaining data security, privacy, and governance through Snowflake’s built-in capabilities.
NeMo Retriever enhances the performance and scalability of chatbot applications and can accelerate time to value for the 400+ enterprises[1] already building AI applications with Snowflake Cortex (some features may be in preview), Snowflake’s fully managed large language model (LLM) and vector search service. The expanded collaboration will also include the availability of NVIDIA TensorRT™ software, which delivers low latency and high throughput for deep learning inference applications, enhancing LLM-based search capabilities.
Beyond Snowpark Container Services, NVIDIA accelerated computing powers several of Snowflake’s AI products, including:
- Snowflake Cortex LLM Functions (public preview): With Snowflake Cortex LLM Functions, users with SQL skills can leverage smaller LLMs to cost-effectively address specific tasks, including sentiment analysis, translation, and summarisation, in seconds (see the sketch after this list). More advanced use cases include the development of AI applications in minutes using performant models from Mistral AI, Meta and more.
- Snowflake Copilot (private preview): Snowflake’s LLM-powered assistant, Snowflake Copilot, brings generative AI to everyday Snowflake coding tasks with natural language. Users can ask questions of their data in plain text, write SQL queries against relevant data sets, refine queries and filter down insights, and more.
- Document AI (private preview): Document AI helps enterprises use LLMs to easily extract content like invoice amounts or contractual terms from documents and fine-tune results using a visual interface and natural language. Customers use Document AI to help their teams reduce manual errors and increase efficiencies with automated document processing.
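For illustration only, the following is a minimal sketch of how a SQL-savvy user might call Cortex LLM Functions from Python. It assumes the snowflake-connector-python package and the preview SNOWFLAKE.CORTEX.SENTIMENT, TRANSLATE, SUMMARIZE and COMPLETE functions; exact signatures, available models and regional availability may differ by release, and the connection parameters shown are placeholders, not real credentials.

```python
# Sketch: invoking Snowflake Cortex LLM Functions through the Python connector.
# Function names/signatures reflect the public preview and may change;
# connection parameters below are placeholders.
import snowflake.connector

# Placeholder credentials -- replace with your own account details.
conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)

review = "The new dashboard is fast, but setup took longer than expected."

queries = {
    # Sentiment score for the text (roughly -1 = negative to 1 = positive).
    "sentiment": "SELECT SNOWFLAKE.CORTEX.SENTIMENT(%s)",
    # Translate the English text to German.
    "translation": "SELECT SNOWFLAKE.CORTEX.TRANSLATE(%s, 'en', 'de')",
    # Summarise the text.
    "summary": "SELECT SNOWFLAKE.CORTEX.SUMMARIZE(%s)",
    # Free-form completion with a smaller hosted model (model list may vary).
    "completion": "SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-7b', %s)",
}

cur = conn.cursor()
try:
    for task, sql in queries.items():
        cur.execute(sql, (review,))   # bind the text as a query parameter
        print(task, "->", cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```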
Learn More:
- Register for Snowflake Data Cloud Summit 2024, taking place June 3-6, 2024 in San Francisco, to get the latest on Snowflake’s AI innovations and upcoming announcements here.
- Watch NVIDIA GTC on-demand sessions presented by Snowflake, including how to build chatbots with a retrieval-augmented generation (RAG) architecture, or stop by in person to learn how to leverage LLMs for life sciences.
- Learn how organisations are bringing generative AI and LLMs to their enterprise data in this video.
- Stay on top of the latest news and announcements from Snowflake on LinkedIn and Twitter.