Google dazzled the world this month with a demo of its most cutting-edge generative artificial intelligence (AI) model, Gemini 1.5, a follow-up to the first Gemini model released last December. Among other feats, Gemini 1.5 excels at the “needle-in-a-haystack” challenge, in which the model must identify a frame of video matching a text description.
However, Google has disclosed little technical detail about how the software works, a practice now typical of AI models from the biggest commercial entities. The 58-page technical report that Google released about Gemini 1.5 contains only general descriptions of the model and the approach used, without detailing the architecture of Gemini 1.5 itself. And, of course, the code is not available.
Also: Meet Gemini 1.5, Google’s newest AI model with major upgrades from its predecessor
In that sense, Gemini 1.5 continues a recent trend among Google, OpenAI, and other commercial enterprises: obfuscating the technical details of AI.
That kind of secrecy presents an opportunity for open-source software that can match some of Gemini’s abilities while opening up access to its code.
In work published this month by Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel of the University of California, Berkeley, and described on the project’s GitHub site, the scientists adapt Meta’s open-source Llama 2 large language model to create a multi-modal model that, like Gemini 1.5, can process not just text but also video and imagery, though unlike Gemini 1.5 it cannot handle audio.
Also: GPT-4 is getting significantly dumber over time, according to a study
Using the mainstream version of Llama 2, a relatively modest 7-billion-parameter neural net, the authors were able to handle input of up to one million “tokens”, the units of text, image, or video data fed into the model. That is a dramatic increase over the 128,000 tokens handled by Gemini 1.0 and by OpenAI’s GPT-4 Turbo.
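To get a sense of why that jump matters, remember that a standard Transformer’s attention mechanism compares every token with every other token, so its memory cost grows with the square of the context length. A rough back-of-the-envelope calculation in Python (the figures are illustrative, not from the paper) shows how quickly that blows up:

```python
# Rough arithmetic (not from the paper): a vanilla Transformer builds an
# attention score matrix whose size grows with the square of the token count,
# which is why going from 128,000 to one million tokens is such a leap.
def attention_matrix_gib(num_tokens: int, bytes_per_entry: int = 2) -> float:
    """Memory for one full attention score matrix, per head and per layer,
    assuming 16-bit entries."""
    return num_tokens ** 2 * bytes_per_entry / 2 ** 30

for n in (128_000, 1_000_000):
    print(f"{n:>9,} tokens -> {attention_matrix_gib(n):>10,.1f} GiB per head, per layer")
```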
Their creation, known as the Large World Model (LWM), performs tasks much like Gemini 1.5. It can solve a needle-in-a-haystack type of problem, such as answering the question, “What color jacket was the girl on the trampoline wearing?”, when fed a one-hour YouTube video:
U.C. Berkeley’s Large World Model can answer a “needle-in-the-haystack” question about a particular moment in video better than Google’s Gemini 1.0 or OpenAI’s GPT-4 Turbo.
UC Berkeley
Liu and team haven’t yet shown how their results compare to Gemini 1.5. Instead, the team shows comparisons with GPT-4 and Gemini 1.0.
As shown in the illustration above, LWM answers the needle-in-a-haystack question correctly, while the other two models fail.
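For readers curious how this kind of test is constructed, here is a hedged, text-only sketch of the general recipe: bury one distinctive fact in a mass of filler and ask the model to retrieve it. The needle sentence, the jacket color, and the ask_model call are illustrative placeholders, not details taken from the LWM benchmark or its video version.

```python
# Text-only sketch of a needle-in-a-haystack test (the figure above shows the
# video version). The needle, filler, and `ask_model` are placeholders.
import random

def build_haystack(needle: str, filler: str, num_filler: int, seed: int = 0) -> str:
    """Bury a single 'needle' sentence at a random spot in a long filler context."""
    rng = random.Random(seed)
    sentences = [filler] * num_filler
    sentences.insert(rng.randrange(num_filler + 1), needle)
    return " ".join(sentences)

needle = "The girl on the trampoline was wearing a yellow jacket."   # illustrative answer
context = build_haystack(needle, "Nothing much happened in this part of the video.", 5_000)
question = "What color jacket was the girl on the trampoline wearing?"

# prompt = context + "\n\n" + question
# answer = ask_model(prompt)   # hypothetical call to a long-context model
# A model that truly uses its full context should answer "yellow" no matter
# where in the haystack the needle was buried.
```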
LWM can hold chats about what’s going on in a video clip and can discuss the contents of images at length, a process the researchers call “image chat”. LWM can also generate images and videos when supplied with text descriptions in the prompt (see both examples, below):
UC Berkeley
Strikingly, it appears that Liu and team achieved results equivalent to Gemini 1.0 with less computing power. LWM was trained on one slice of a TPU Version 4 “pod”, consisting of 256 TPU chips with two cores apiece, for 58 hours. In the case of Gemini 1.0, the technical report, just like the technical report for 1.5, contains few details about the training infrastructure. All we know is that Google used some number of TPU Version 4 and Version 5 pods for some amount of time. It is quite possible Google used far more computing to train Gemini than Liu and team did to train LWM.
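Taking those figures at face value, the training budget works out to a modest number of chip-hours; no comparable figure can be computed for Gemini because Google discloses so little. The arithmetic below simply restates the numbers cited above:

```python
# Simple arithmetic on the compute budget reported for LWM (figures as cited
# above); nothing here is inferred about Gemini's training budget.
tpu_chips = 256   # one slice of a TPU v4 pod
hours = 58
print(f"LWM training: {tpu_chips * hours:,} TPU-chip-hours "
      f"({tpu_chips} chips x {hours} hours)")
```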
So, how is LWM, which is based on a relatively small open-source model and runs on less computing power, able to achieve results similar to Gemini 1.0? The answer lies in a different approach to developing the neural network.
Both models start from a similar kind of neural net, a Transformer. To its version, Google added “innovations in training algorithms, dataset, and infrastructure.”
Also: How Google and OpenAI prompted GPT-4 to deliver more timely answers
In the case of LWM, Liu and team trained the model in multiple successive rounds with increasingly large “context windows”, the amount of data the model works on in a single pass. The team started with a context window of 32,768 tokens and worked up to one million tokens.
That staged training builds on “Ring Attention”, a technique developed last year by Liu and team. The insight of Ring Attention is that a very long input can be split into blocks that many chips work on concurrently, rather than sequentially, which parallelizes the training, gets more done in less time, and uses the chips more efficiently.
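To make the idea concrete, here is a minimal, single-machine sketch of the blockwise attention computation that Ring Attention distributes across devices, written in plain NumPy. In the real system each block lives on a separate TPU or GPU and the key/value blocks are passed around a ring of devices while computation overlaps with communication; this toy version simulates the loop over blocks on one host and is not the LWM code.

```python
# Toy, single-host sketch of blockwise attention, the computation that Ring
# Attention distributes around a ring of devices. Not the LWM implementation.
import numpy as np

def blockwise_attention(q, k, v, block_size):
    """softmax(q @ k.T / sqrt(d)) @ v, computed one key/value block at a time
    and merged with a running (online) softmax."""
    seq_len, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros((seq_len, v.shape[1]))            # accumulated weighted values
    running_max = np.full((seq_len, 1), -np.inf)     # row-wise max of scores seen so far
    running_den = np.zeros((seq_len, 1))             # running softmax denominator

    for start in range(0, k.shape[0], block_size):   # one step per "hop" around the ring
        k_blk = k[start:start + block_size]
        v_blk = v[start:start + block_size]
        scores = (q @ k_blk.T) * scale               # (seq_len, block_size)
        new_max = np.maximum(running_max, scores.max(axis=1, keepdims=True))
        rescale = np.exp(running_max - new_max)      # correct earlier partial sums
        weights = np.exp(scores - new_max)
        out = out * rescale + weights @ v_blk
        running_den = running_den * rescale + weights.sum(axis=1, keepdims=True)
        running_max = new_max

    return out / running_den

# Sanity check against ordinary full attention on a small example.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((256, 64)) for _ in range(3))
scores = (q @ k.T) / np.sqrt(64)
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
reference = (probs / probs.sum(axis=1, keepdims=True)) @ v
assert np.allclose(blockwise_attention(q, k, v, block_size=32), reference)
```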
The architecture of LWM.
UC Berkeley
“We adopt a training approach […] where our model is trained on progressively longer sequence lengths, starting from 32K tokens and ending at 1M tokens in increasing powers of two,” write Liu and team.
“Intuitively, this allows the model to save compute by first learning shorter-range dependencies before moving onto longer sequences. By doing this, we are able to train on orders of magnitude more tokens compared to directly training on the maximum target sequence length.”
LWM is trained on sequences of data of increasing length.
UC Berkeley
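The doubling schedule the researchers describe is simple to write down explicitly. A minimal sketch of the stages, following the quote above (the loop body is a stand-in, not the LWM training loop):

```python
# Staged context lengths from 32K to 1M tokens, doubling each time, as
# described in the quote above. The loop body is a placeholder for training.
stages = [32_768 * 2 ** i for i in range(6)]   # 32,768 ... 1,048,576
for i, context_len in enumerate(stages, start=1):
    # Each stage would resume from the previous stage's weights and train on
    # sequences packed to `context_len` tokens.
    print(f"stage {i}: context window = {context_len:,} tokens")
```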
The data used to train LWM draws on some of the most prominent data sets that have been put into the wild, including Books3, which is at the heart of a controversy over copyright infringement. The researchers also used Video Instruct-100K, a “video conversation dataset” hosted on GitHub.
Google didn’t disclose the makeup of Gemini 1.0’s training data, describing it only as follows: “Gemini models are trained on a dataset that is both multimodal and multilingual. Our pretraining dataset uses data from web documents, books, and code, and includes image, audio, and video data.”
Also: AI will unleash the next level of human potential. Here’s how
While Google has already moved forward with Gemini 1.5, which can handle as many as 10 million tokens in its input, Liu and team believe Ring Attention can “theoretically extend to an infinite context, bounded only by the number of devices available.”
They continue: “We believe that our released model will provide a foundation for future work on developing longer context models, as well as encourage more challenging benchmarks that contain difficult long-range tasks that require higher levels of synthesis, rather than pure fact retrieval.”
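The “bounded only by the number of devices” claim follows from the fact that, under Ring Attention, each device only ever holds one block of the sequence, so the maximum context grows roughly linearly with the device count. A rough illustration, using an assumed per-device block size chosen only to make the numbers round:

```python
# Rough illustration of linear scaling (the per-device block size is an
# assumption for illustration, not a figure from the paper).
tokens_per_device = 4_096
for devices in (256, 1_024, 4_096):
    print(f"{devices:>5} devices -> ~{devices * tokens_per_device:,} tokens of context")
```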
The code of LWM is posted on the research team’s GitHub site.