Google has released Gemma, a family of “open” large language models compact enough to run on a personal computer.
Gemma comes in two sizes: two billion parameters and seven billion parameters. The larger version is intended for GPU- and TPU-accelerated systems, while the smaller one is billed as suitable for CPU-based on-device applications – even laptops. The architecture of both is similar and “share[s] technical and infrastructure components” with Gemini – the Chocolate Factory’s latest and most powerful large language model.
In benchmark tests assessing reasoning, math, and coding skills, the larger Gemma model outperformed Meta’s Llama 2 – despite being smaller than its 13-billion-parameter rival. The Gemma models were trained primarily on English text scraped from the internet, filtered to minimize toxic or inappropriate language and sensitive data such as personally identifiable information.
Google tweaked the models using instruction tuning and reinforcement learning from human feedback to improve their responses. It has also released toolkits that support fine-tuning and inference in different machine learning frameworks – including JAX, PyTorch, and TensorFlow through Keras.
The models are small enough to run on a local device rather than big iron in the cloud, and can be adapted for specific use cases like summarization or retrieval-augmented generation to create custom chatbots.
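The retrieval-augmented generation workflow mentioned above can be sketched in a few lines: retrieve a relevant local document, then prepend it to the prompt before handing it to the model. This is a minimal illustration only – the `retrieve` and `build_prompt` helpers are hypothetical, the word-overlap ranking is a stand-in for a real embedding-based retriever, and the final prompt would in practice be passed to a locally hosted Gemma model (for instance via Keras) rather than printed.

```python
# Minimal sketch of retrieval-augmented generation (RAG) prompt assembly.
# Helper names are illustrative, not a real Gemma API; a production system
# would use embedding similarity instead of naive word overlap.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model can answer from local data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Gemma 2B is aimed at CPU-based, on-device applications.",
    "Gemma 7B targets GPU- and TPU-accelerated systems.",
]
prompt = build_prompt("Which Gemma model targets accelerated systems?", docs)
print(prompt)
```

At this point `prompt` would be fed to the locally running model; because the context travels inside the prompt, the base weights need no retraining for the custom-chatbot use case the article describes.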
To be clear, Gemma isn’t technically an open source model. Google didn’t release the source code and training data that would allow developers to train the model themselves. Only the pre-trained models and their weights are accessible.
Opinions are divided over openness in AI. On one hand, it allows developers to tinker and explore the technology. On the other, as with any tech, miscreants could abuse it for nefarious purposes. The US Department of Commerce’s National Telecommunications and Information Administration (NTIA) is seeking public comments on the issue.
“AI is an accelerator – it has the potential to make people’s existing capabilities better, faster, and stronger,” secretary of commerce Gina Raimondo declared. “In the right hands, it carries incredible opportunity, but in the wrong hands, it can pose a threat to public safety.”
The NTIA wants to examine how “open-weight” models like Gemma might impact society or national security. Experts fear that developers could use these systems to generate fraudulent spam, launch disinformation campaigns, or develop malware.
The researchers from Google who developed the Gemma models appear to be aware of the risks. They concluded in a paper [PDF]: “We are confident that Gemma models will provide a net benefit to the community given our extensive safety evaluations and mitigations; however, we acknowledge that this release is irreversible and the harms resulting from open models are not yet well defined, so we continue to adopt assessments and safety mitigations [proportional] to the potential risks of these models.” ®
Copyright for syndicated content belongs to the linked source: The Register – https://go.theregister.com/feed/www.theregister.com/2024/02/22/google_gemma_llms/