Giga ML wants to help companies deploy LLMs offline

AI is all the rage, particularly text-generating AI built on large language models, or LLMs (think models along the lines of ChatGPT). In one recent survey of ~1,000 enterprise organizations, 67.2% said they see adopting LLMs as a top priority by early 2024.

But barriers stand in the way. According to the same survey, a lack of customization and flexibility, paired with the inability to preserve company knowledge and IP, was, and still is, preventing many businesses from putting LLMs into production.

That got Varun Vummadi and Esha Manideep Dinne thinking: What might a solution to the enterprise LLM adoption challenge look like? In search of one, they founded Giga ML, a startup building a platform that lets companies deploy LLMs on-premise — ostensibly cutting costs and preserving privacy in the process.

“Data privacy and customizing LLMs are some of the biggest challenges faced by enterprises when adopting LLMs to solve problems,” Vummadi told TechCrunch in an email interview. “Giga ML addresses both of these challenges.”

Giga ML offers its own set of LLMs, the “X1 series,” for tasks like generating code and answering common customer questions (e.g. “When can I expect my order to arrive?”). The startup claims the models, built atop Meta’s Llama 2, outperform popular LLMs on certain benchmarks, particularly the MT-Bench test set for dialogs. But it’s tough to say how X1 compares qualitatively; this reporter tried Giga ML’s online demo but ran into technical issues. (The app timed out no matter what prompt I typed.)

Even if Giga ML’s models are superior in some aspects, though, can they really make a splash in the ocean of open source, offline LLMs?

In talking to Vummadi, I got the sense that Giga ML isn’t so much trying to create the best-performing LLMs out there as it is building tools that let businesses fine-tune LLMs locally, without having to rely on third-party resources and platforms.

“Giga ML’s mission is to help enterprises safely and efficiently deploy LLMs on their own on-premises infrastructure or virtual private cloud,” Vummadi said. “Giga ML simplifies the process of training, fine-tuning and running LLMs by taking care of it through an easy-to-use API, eliminating any associated hassle.”
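
Giga ML hasn’t published details of that API, but as a rough point of reference, here is a minimal sketch of what fully offline, on-premise inference can look like using the open-source Hugging Face transformers library and a locally downloaded Llama 2 checkpoint. The model path is hypothetical and the code is illustrative only; it does not represent Giga ML’s product.

```python
# Illustrative only: this is NOT Giga ML's API. It sketches the kind of
# fully offline, on-premise inference the article describes, using the
# open-source Hugging Face transformers library and a Llama 2 checkpoint
# that has already been downloaded to local disk.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/models/llama-2-7b-chat"  # hypothetical path to local weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR,
    device_map="auto",       # spread layers across available GPUs/CPU (needs the `accelerate` package)
    local_files_only=True,   # never contact an external model hub
)

prompt = "When can I expect my order to arrive?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the pattern is that weights, prompts and outputs never leave the company’s own infrastructure, which is the privacy argument Vummadi is making.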

Vummadi emphasized the privacy advantages of running models offline — advantages likely to be persuasive for some businesses.

Predibase, the low-code AI dev platform, found in a survey that less than a quarter of enterprises are comfortable using commercial LLMs, owing to concerns over sharing sensitive or proprietary data with vendors. Nearly 77% of the survey’s respondents said that they either don’t use or don’t plan to use commercial LLMs beyond prototypes in production, citing issues relating to privacy, cost and lack of customization.

“IT managers at the C-suite level find Giga ML’s offerings valuable because of the secure on-premise deployment of LLMs, customizable models tailored to their specific use case and fast inference, which ensures data compliance and maximum efficiency,” Vummadi said. 

Giga ML, which has raised ~$3.74 million in VC funding to date from Nexus Venture Partners, Y Combinator, Liquid 2 Ventures, 8vdx and several others, plans in the near term to grow its two-person team and ramp up product R&D. A portion of the capital is also going toward supporting Giga ML’s customer base, which currently includes unnamed “enterprise” companies in finance and healthcare, Vummadi said.

Source: TechCrunch – https://techcrunch.com/2023/12/28/giga-ml-wants-to-help-companies-deploy-llms-offline/