China's tech landscape is buzzing with the latest advancements in generative video technology. Tencent, the tech behemoth renowned for its video gaming empire and WeChat, has just released an updated version of its open-source video generation model, DynamiCrafter, on GitHub. This move signifies the escalating efforts by China's tech giants to make a significant impact in the text- and image-to-video space.
DynamiCrafter, like other generative video tools, uses a diffusion-based method to turn captions and still images into short videos. The second generation of DynamiCrafter produces videos at a pixel resolution of 640×1024, a notable step up from the 320×512 output of the initial release.
The team behind DynamiCrafter claims its technology stands out from competitors by broadening image animation techniques to more diverse visual content. It does this by incorporating the input image into the generative process as guidance, a departure from traditional approaches.
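For readers who want a concrete sense of what image-conditioned video diffusion looks like in practice, below is a minimal sketch using Stable Video Diffusion (one of the comparison models mentioned later) through Hugging Face's diffusers library. The model ID, input image path, and output settings are illustrative assumptions, and DynamiCrafter itself ships its own inference scripts on GitHub rather than this API; the sketch only shows the general pattern of feeding a still image into a diffusion pipeline as guidance for a short clip.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load an open image-to-video diffusion pipeline (model ID is illustrative).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# The still image conditions the generation, analogous to the
# image-as-guidance idea described above. The path is hypothetical.
image = load_image("input.png")
image = image.resize((1024, 576))

# Denoise a short sequence of frames guided by the input image.
generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

# Write the frames out as a short video clip.
export_to_video(frames, "generated.mp4", fps=7)
```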
A demo comparing DynamiCrafter with other models such as Stable Video Diffusion and Pika Labs shows the Tencent model's output as slightly more animated. While none of these models suggests AI will soon be capable of producing full-length movies, generative video is being hailed as the next big thing in the AI race, following the boom in generative text and images.
China's tech giants are not being left behind in this race. ByteDance, Baidu, and Alibaba have each released their own video diffusion models, with Alibaba open-sourcing its video generation model VGen, a strategy increasingly popular among Chinese tech firms aiming to reach the global developer community.