Almost overnight, nearly every industry wanted some connection with AIGC, whether merely to hype the topic and lift its stock price, or to genuinely ride this wave of enthusiasm and reshape its competitive landscape. Starting in 2023, AIGC began to transform one traditional industry after another.

From the perspective of business history, the film and television entertainment industry has never lagged in accepting and applying new technologies; in many cases it has been at the forefront. At a moment when it is being squeezed by entertainment forms such as games, the video industry will obviously not sit idly by and watch AIGC rise, especially while Chinese and foreign companies alike are chasing "cost reduction and efficiency improvement". If AI can contribute even a little, why not?

In fact, just as ChatGPT and its successors were dominating the news, Netflix released an experimental animated short film, "Dog and Boy", on January 31. The plot is not particularly innovative; the experiment lies in the closing credits, which reveal that many parts of the animation were created not by humans but by AIGC.

Of course, for the long-form film and television industry, with its heavy production pipelines and fine division of labor, AIGC remains in its infancy in both capability and tooling, and it may take some time before it is applied to real large-scale projects. But for lighter-weight PUGC and UGC content, a batch of new tools has already emerged for creators, from using ChatGPT to write scripts to one-click generation with AI video tools.

The most direct impact of AIGC on video creation is, naturally, efficiency, especially in post-production.
When scripts, shooting, and even editing are no longer barriers for video creators, the ultimate competition shifts to who can brainstorm the most original ideas. For platforms, content output aided by AIGC tools may grow exponentially, and both the recommendation algorithm's grip on attention and the review mechanisms for high-risk content will face greater challenges. AIGC can produce without sleeping; users' mortal bodies still need rest.

1. An "AIGC test bomb" in streaming media

On the last day of January, "Dog and Boy", the first AIGC animated short film jointly created by streaming giant Netflix, Microsoft, and WIT STUDIO, was officially released. Calling it AIGC is, frankly, a bit of a stretch: the animation was not entirely generated by AI, which was responsible only for drawing some of the background scenes.

In terms of plot, the story told in this under-four-minute animation, and the imagery shown, are relatively simple. The film leans on long-range shots: Mount Fuji and the Shizuoka coastline seen from a running train, the waterside village and towering peaks behind the protagonist as he descends the mountain. There are hardly any complicated transitions or difficult action scenes, and anyone with some anime-watching experience will easily notice that the animation is not especially sophisticated in art style or background rendering. Even granting how quickly it was made, it reaches only an entry level.

Yet this rough, entry-level feel may be precisely the highlight of the short film. After all, this is an animation assisted by basic AIGC tools.

Stills from "Dog and Boy"

The biggest difference between "Dog and Boy" and many other AIGC works is probably how well it reduces the presence of the AI.
If the production team had not singled out the AI-created shots at the end of the film, it would be difficult for anyone to tell the difference.

At the end of the animation, the production team revealed the general process of AI-assisted background production. Broadly, it can be divided into four steps: hand-drawn layout, primary AI generation, secondary AI generation, and final draft (manual retouching). That is, the animator first sketches a rough scene by hand, hands it to the AI for two rounds of generation, and finally retouches the AI output into a final draft. In effect, the tedious middle steps of background production, steps 2 and 3, are handed entirely to AI; human intervention is needed only at the initial creative stage and the final draft, saving animator labor to the greatest extent.

The sudden release of such an experimental short in early 2023, just as AIGC unexpectedly caught fire, was indeed like dropping a powerful bomb on the entire animation and streaming industries. The outside world had been skeptical that AIGC could intervene in the industrialized film and television pipeline, yet Netflix had already been quietly working with Microsoft and had begun experimenting in this direction.

However, in contrast to curious audiences and an excited capital market, many industry workers who were already anxious about AIGC have voiced their disgust on social media at AI's participation in the animation. The Japanese animation industry has long been notorious for exploiting grassroots artists; now, rather than improving creators' treatment to raise overall quality, the platform instead hopes to replace that manual work with AI.
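The four-step workflow above can be sketched as a simple pipeline. The function names below are purely illustrative stand-ins for the human and AI stages described in the credits; they do not correspond to any real Netflix or rinna tool.

```python
# Hypothetical sketch of the four-step background workflow:
# human layout -> two AI refinement passes -> human final draft.

def hand_drawn_layout(scene_brief):
    """Step 1: the animator supplies a rough hand-drawn layout."""
    return {"scene": scene_brief, "stage": "rough_layout"}

def ai_generate(draft, pass_number):
    """Steps 2-3: the AI model refines the layout in successive passes."""
    refined = dict(draft)
    refined["stage"] = f"ai_pass_{pass_number}"
    return refined

def manual_finish(draft):
    """Step 4: the animator corrects the AI output into a final draft."""
    final = dict(draft)
    final["stage"] = "final_draft"
    return final

def produce_background(scene_brief):
    draft = hand_drawn_layout(scene_brief)     # step 1: human
    draft = ai_generate(draft, pass_number=1)  # step 2: AI
    draft = ai_generate(draft, pass_number=2)  # step 3: AI
    return manual_finish(draft)                # step 4: human

result = produce_background("Mount Fuji seen from a running train")
print(result["stage"])  # the pipeline always ends in a human final draft
```

The point of the structure is simply that the human touches only the first and last stages; everything in between is delegated.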
Opposition to the use of AI from overseas audiences and industry insiders

For now, the labor-versus-capital conflict has overshadowed the more cutting-edge technological exploration within AIGC itself. In fact, labor shortages have long constrained Japanese animation production, especially as more and more streaming platforms develop a large appetite for anime. From the standpoint of production efficiency, the Netflix animation team is naturally very optimistic about human-AI collaborative creation.

On the technical side, Netflix is already planning more general AIGC creation tools. In an interview with Business Insider Japan, Daiki Sakurai, general producer of Netflix animation, said that the AIGC used in "Dog and Boy" is not an off-the-shelf tool but was developed entirely from scratch in cooperation with rinna. For copyright reasons, all of its training material came from previous Netflix original series, which is precisely why the training data falls far short of what a high-quality animation would require.

Sakurai also described their future vision: "I think it would be a good idea to collect the background art created by all animation companies in Japan and build an AI for background art. It would not belong to any one company but would become a common asset of the Japanese animation industry. Perhaps we should do something like this to share our knowledge and technology with everyone."

Netflix evidently prefers an open, shared route over developing everything in isolation. As Yiyuguancha noted in "Who will stand out in the battle for the Chinese version of ChatGPT", training any large-scale AI model carries extremely high financial and time costs.
Without broader cooperation, Netflix alone will struggle to produce a general-purpose AIGC tool of practical value, let alone truly fold it into its own content production. Judging by the richness of its training material, Disney, with its century of history, may hold greater potential for developing dedicated AIGC, especially since it began exploring text-to-video models several years ago and has gone on to use models to generate animation automatically.

Just recently, iQiyi also announced that it will adopt "Wenxin Yiyan", the AIGC tool Baidu will launch later. How much a plain text-generation tool can change content production on long-video platforms, however, remains to be seen.

Compared with the film and television industry, which must contend with audiences' high expectations for polish and behind-the-scenes creators' resistance to AIGC, the PUGC and short-video industries carry far lighter burdens, the latter especially. Even before the AIGC concept caught on, widely used pan-AI tools were already lowering the threshold for short-video production, and the arrival of AIGC has added fresh fuel for short-video platforms.

2. Short video should not stop at AI dubbing; mass-produced AIGC is just around the corner

Users who spend time on Douyin and Kuaishou will be familiar with AI dubbing voices such as Xiaomei, Xiaoshuai, and Sangbiao. AI-generated dubbing is currently the tool short-video content relies on most, but as the AIGC toolbox keeps expanding, AI can clearly bring short video much more than dubbing. Since ChatGPT-style tools took off, many people have found that their current capabilities are fully up to writing video scripts for simple scenarios.
Once a script exists, the remaining work is shooting, editing, dubbing, and so on. If AIGC tools could handle each of these steps, could a short video be generated automatically by AI alone?

On generating video directly from text or images, several major Silicon Valley companies already have experimental products. Last October, Meta announced a video AI tool called Make-A-Video. Simply put, the tool generates a sequence of images with AI and then links those images into a video. Judging from the results, however, the clips Meta generated look too monotonous, and more importantly the resolution is severely limited.

Google, as an AI heavyweight, went further, building text-to-image models aimed at longer, higher-resolution generation, and produced two at once: Imagen (based on a large language model plus the now-popular diffusion model) and Parti (based on Google's own Pathways framework). The former pushes up the resolution of AI-generated images, while the latter relies on large language models to generate content with more complex scenes.

For now, the offerings from these two giants are of academic rather than practical value, because large companies weigh many considerations when shipping AIGC tools. If a product is not polished enough, what happened to Google's ChatGPT rival a few days ago, wiping out hundreds of billions in market value, could happen again.

Startups have no such concerns. A few days after ChatGPT took off, QuickVid, a tool for generating short videos in one click, appeared. It has plenty of bugs, but it is essentially a super mash-up of various AIGC open-source projects: it relies on GPT-3's text generation to write a short-video script, then automatically extracts keywords from the script (or lets the user enter them manually).
From those keywords, it pulls free background footage from the Pexels library, overlays images generated from text by DALL-E 2, calls Google Cloud's text-to-speech API for a synthesized voice-over, and adds background music from YouTube's royalty-free music library.

By stitching together ready-made AIGC tool APIs and large free material libraries, QuickVid founder Daniel Habib launched the product in just a few weeks. Crucially, the user experience really is foolproof: type a single word, say "cat", click Generate, and a 48-second rough cut appears in about three minutes. If you output directly without any adjustments, the tool will even supply a title, a description, and the keyword tags for the short video.

QuickVid screenshots

The whole process takes less than ten minutes. The platform currently only lets users download videos and then upload them to short-video platforms themselves, but the website says one-click uploading to YouTube and TikTok is in development.

As a product, QuickVid is still at a very early stage; charging a US$10 monthly subscription for a single core feature, while still in beta, is a bold move.

For the fast-accelerating domestic AIGC track, the emergence of similar tools is obviously only a matter of time. The mobile version of Jianying, ByteDance's editing tool, has long had an AI image-to-video function; wire it up to a few relevant AI models and it could easily become a QuickVid-style short-video AIGC tool. Meanwhile, for short video, which leans more on algorithms than on content quality, fast, efficient, and cheap AIGC tools are undoubtedly good for creators and platforms alike.
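The QuickVid-style assembly line described above can be sketched end to end. Every helper below is a hypothetical stand-in for the real services named in the text (GPT-3 for the script, Pexels for footage, DALL-E 2 for overlays, Google Cloud TTS for the voice-over); none of these are real API calls.

```python
# Illustrative sketch of a one-click short-video pipeline, assuming
# stand-in functions in place of the real GPT-3 / Pexels / DALL-E 2 /
# Google Cloud TTS integrations mentioned in the article.

def generate_script(topic):
    # Stand-in for a GPT-3 call that turns a topic into a short script.
    return f"A short video script about {topic}."

def extract_keywords(script):
    # Stand-in for automatic keyword extraction (or manual entry).
    return [w.strip(".").lower() for w in script.split() if len(w) > 4]

def fetch_stock_clips(keywords):
    # Stand-in for searching the free Pexels video library per keyword.
    return [f"pexels_clip_{k}.mp4" for k in keywords]

def generate_overlays(keywords):
    # Stand-in for DALL-E 2 text-to-image overlays.
    return [f"dalle2_image_{k}.png" for k in keywords]

def synthesize_voiceover(script):
    # Stand-in for Google Cloud text-to-speech on the script.
    return "voiceover.mp3"

def assemble_video(clips, overlays, voiceover, music):
    # Stand-in for the final edit/mux step.
    return {"clips": clips, "overlays": overlays, "audio": [voiceover, music]}

def one_click_generate(topic):
    script = generate_script(topic)
    keywords = extract_keywords(script)
    return assemble_video(
        fetch_stock_clips(keywords),
        generate_overlays(keywords),
        synthesize_voiceover(script),
        music="royalty_free_track.mp3",  # YouTube royalty-free library stand-in
    )

video = one_click_generate("cat")
print(video["audio"])  # voice-over plus background music
```

The design point is that the tool owns almost no models of its own: each stage is a thin call to an existing service, which is why a solo founder could assemble it in weeks.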
Creators can spend more time on bold creative ideas, while platforms will of course welcome explosive growth in the volume of content. All of this, naturally, presumes the AIGC tools have been polished to maturity.

In fact, whether for long or short video, the most promising area for AIGC creation right now is probably B-side commercial delivery, especially for brands that used to pour manpower, materials, and money into producing video advertising. For consumer-goods and game companies alike, AIGC video creation will let social-media "volume buying" cut costs to the greatest extent; whether it can ultimately "improve efficiency" will take continued trial and error and user testing.

AIGC application landscape, from the Artificial Intelligence Generated Content (AIGC) White Paper (2022)

As for copyright issues and combating misinformation, those may only be worth discussing once domestic ChatGPT products, such as Baidu's, appear.

As of now, AIGC has already begun to transform industry after industry, and the video industry, once considered high-threshold, will inevitably not escape this fate. Whether in long or short video, creativity alone will ultimately become the real threshold; after all, it is the only link AI cannot yet take over.

*Reference: Geek Park, "A tool for generating short videos with one click is here, with just one paragraph"

Author: Great Entertainment. Source: Yiyuguancha (ID: yiyuguancha), the telescope and sonar of the pan-entertainment industry.