Tencent, in collaboration with Tsinghua University and the Hong Kong University of Science and Technology, has launched a new image-to-video model called “Follow-Your-Click.” The project page is now live on GitHub, with the code set to be released in April, and an accompanying research paper has been published.
The model’s main features are local animation generation and multi-object animation, supporting a variety of motions such as head turns and wing flapping.
According to the introduction, Follow-Your-Click generates local image animations from a user’s click and a brief action prompt. Users simply click on the desired region and add a few prompt words to animate that otherwise static part of the picture into a video, for example making an object smile, dance, or flutter.
Beyond animating a single object, the framework also supports animating multiple objects simultaneously, increasing the complexity and richness of the resulting animation. Users can specify the regions and types of motion they want through simple clicks and short phrase prompts, with no need for complex operations or detailed descriptions.
Source: Pandaily – https://pandaily.com/tencents-new-interactive-image-to-video-tool-follow-your-click/