At the end of March, a group of public figures including Elon Musk signed an open letter calling on institutions such as OpenAI to pause the development of more powerful AGI systems for at least six months, so that human society could reach some kind of consensus on the threats posed by AGI (artificial general intelligence). Although the letter is short, it covers a remarkably wide range of issues. For example: how should humanity weigh the costs when AGI floods information channels with propaganda and even lies? If a job is one that people enjoy and find fulfilling, should it still be replaced by AI? Are these efficiency gains worth the risk of "losing control of our civilization"? And how should humanity build a regulatory system for AGI?

The letter's most immediate effect was to trigger an unprecedented split of opinion in Silicon Valley. After all, if we take the arrival of GPT-3.5 in human society as the starting point, too many blind spots and disputes around AGI development remain unresolved. Yet while humans are still arguing, AGI has already begun to cause trouble around the world: misinformation and data leaks loom like a huge, unsettling black box. In recent months, therefore, information-rich economies including Europe, the United States, Japan, and South Korea have begun discussing how to regulate large models. On April 11, the Cyberspace Administration of China also issued the "Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments)" (hereinafter the "Measures"). The significance of the Measures is that they can be regarded as one of the earliest official documents in human society's regulation of AGI.
The regulatory perspective offers a distinctive way to understand AGI. Unlike scientists, who work within complex explanatory systems of parameters and intricate model architectures, regulators speak for society's interests and think from the standpoint of society as a whole. Their reasoning is therefore often closer to the plain intuition of most people, while still reflecting the thinking of a core expert community, so it can partially answer many of the debates about this new phenomenon. The Measures can thus serve as a window through which ordinary people can understand AGI, helping us better grasp and take part in the discussion. AGI is neither angel nor devil, and a disenchanted AGI will become an important part of our public life.

1. "Who am I": Generative artificial intelligence

From Silicon Valley's perspective, AGI is roughly equivalent to a large model: a computer program with some characteristics of intelligent thinking, built from optimized models, structured corpora, and enormous computing power. The Measures classify this "new species" under "generative artificial intelligence", that is, "technology that generates text, pictures, sounds, videos, code, and other content based on algorithms, models, and rules." Clearly, this definition looks at AI from the perspective of its relationship with society. The technical features of AGI will change over time and model sizes will fluctuate, but its core goal of interacting with society will not. No matter how long a model is developed behind closed doors, its ultimate purpose is to output content to society. This is the core difference between AGI and vertical AI applications such as decision support, retrieval, security, and payment.
From a technical perspective, any "generative AI" with a broad user base is likely to have the underlying capabilities of AGI. For ordinary people, though, the term AGI (artificial general intelligence) is a bit too alluring. Many compare AGI to human intelligence, as if they had glimpsed a "cyber soul", forgetting that AGI shows only hints of intelligence and that its future development carries enormous uncertainty. So rather than saying our future is an "AGI society" like the one in "Westworld", it is closer to our generation's technological reality to say that we will usher in a world of "generative artificial intelligence".

2. "Who do I belong to": Service providers

There is a classic question in AGI ethics: for content generated by a large model, does the copyright belong to the model company or to the user who painstakingly wrote the prompt? The Measures do not settle the ownership of copyright, but they do define responsibilities and obligations from a regulatory standpoint, which offers a useful reference. According to the Measures: "Organizations and individuals (hereinafter 'providers') that use generative artificial intelligence products to provide chat, text, image, or sound generation services, including those that support others in generating text, images, sounds, etc. through programmable interfaces, shall bear the responsibilities of producers of the content generated by the product." On this reading, the responsible party is neither the model's R&D team nor the user, but the service provider connecting the large model with the user. Of course, in most cases today the AGI developer and the API provider are the same entity; as the technology ecosystem evolves, however, the entities at different layers may diversify.
Clarifying the responsibilities of the intermediary service providers is consistent with the official definition of "generative artificial intelligence". At the same time, this division of power and responsibility effectively forces the upstream industry chain to build sufficient mutual trust around content.

3. AGI's "content rights": Labeling is required

A debate similar to the copyright question is: can AGI-generated content enjoy the "same rights" as content created by humans? The Measures clearly impose restrictions on AGI content, in two places: "In accordance with the Regulations on the Administration of Deep Synthesis of Internet Information Services, generated images, videos, and other content shall be labeled." And: "Providers shall, in accordance with the requirements of the national cyberspace administration and relevant competent authorities, provide necessary information that may influence user trust and choice, including descriptions of the source, scale, type, and quality of pre-training and fine-tuning data, manual labeling rules, the scale and type of manually labeled data, and the basic algorithms and technical systems." AGI-generated content has always been controversial. During GPT's early testing in particular, the system sometimes behaved like an old man gossiping at the village gate: it would hand users opinions loaded with value judgments without enough supporting evidence to back them up. If the Measures are implemented, AGI will have to bid farewell to this style of unsupported, free-wheeling output and instead take on some search-like, tool-like attributes. And past stunts such as entering AI-synthesized works in art competitions and winning prizes will become a "black mark". This is in keeping with the spirit of the law.
AI is overwhelmingly powerful at content generation, so it naturally must bear a greater burden of proof; multimodal content may carry huge risks, so it naturally requires corresponding control mechanisms. Conversely, if AGI content were granted the same rights as human content, its impact on the human content ecosystem could be difficult to assess.

4. "Regulatory ecology": The Asilomar Principles

There is a small detail in the open letter to OpenAI mentioned at the beginning: it proposes an auditing ecosystem, not merely an auditing system. The letter also cites the Asilomar AI Principles, which hold that advanced AI's impact on humanity will be civilization-scale, and that it should therefore be planned for and managed with commensurate care and resources. In other words, if generative AI is to be a vast system, it cannot be fully regulated through any single link or single actor. In addition to reaffirming the authority of existing laws, the Measures emphasize full-process supervision of generative AI: the corpus ("pre-training data") must be legal and compliant; data labeling must follow "clear, specific, and operable labeling rules" with trained labelers; usage scenarios must comply with regulations and carry corresponding responsibilities; the content itself must be labeled; and there must be clear channels for user reports and feedback. Since AGI will become a vast ecological foundation, regulators need more diversified methods to carry out their task. So although large models have certain black-box properties, this combination of measures is enough to work backwards on the compliance capabilities behind the technical black box, achieving the goal of "opening the box".
To further clarify developers' responsibilities, the Measures add the following clause: "For generated content that is discovered during operation or reported by users and does not comply with the requirements of these Measures, in addition to taking measures such as content filtering, recurrence shall be prevented within three months through model optimization training and other means." In other words, if a provider tries to use "content filtering" alone to evade the deeper responsibility of optimizing the model, that itself violates the Measures. AGI needs a compliant soul, not just a compliant API.

5. Taming fire

Many people compare the invention of AGI to fire. Humanity learned to make fire and has enjoyed the fruits of civilization it brought, but taming fire took far longer. The open letter mentioned above proposes that a good AGI should meet at least these standards: accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In other words, AGI must not remain the alchemical black box it is today; it should offer a higher degree of certainty that it benefits humanity. The letter therefore puts forward more visionary ideas for technical oversight, such as a dedicated vertical regulatory system for large-scale computing power and some kind of underlying code convention. Whether these proposals are worth implementing requires more industry discussion. But one thing is relatively certain: just as humans came to understand fire better in the process of taming it, human society may only come to understand AGI better through continuous interaction and contest with it.

Author: Guo Haiwei. Source: WeChat public account "Pinwan (ID: pinwancool)"