Those Who Criticize the "Information Cocoon" Are Trapped in a "Cognitive Cocoon"

It is human nature to consume information selectively. Although the Internet is awash in information, that abundance does not necessarily narrow our horizons. Read on to see why.

These days, two imported concepts from communication studies have been thoroughly overused: one is "amusing ourselves to death," the other is the "information cocoon."

Both terms have their own specific contexts of application, yet many people grab a definition from a quick search and slap it onto whatever situation is at hand, wielding it as a weapon of criticism. Driven by a fetish for "profundity," they crack the moral whip, opposing entertainment in the name of opposing "amusing ourselves to death" and attacking algorithms in the name of attacking "information cocoons."

It has to be said that this spectacle is itself a kind of amusing ourselves to death, and it also shows that the "cognitive cocoon" really does exist.

Take the "information cocoon" for example. These days, faced with the rampant extreme forces of populism, fan clubs, and gender confrontation in the name of patriarchy/feminism, many people tend to blame the information cocoon.

When Qin Lang's homework was "lost in Paris" and people believed the story without question, the information cocoon took the blame; when the "Fat Cat" incident opened a rift between "misogyny" and "man-hating" camps, the information cocoon took the blame again... After one social controversy after another descends into public-opinion infighting, the "information cocoon" is routinely made to carry the whole burden.

In some people's eyes, it is the "information cocoon" that traps people in echo chambers, where they see only what they want to see and hear only what they want to hear, which makes them ever more self-righteous and ever more paranoid.

"When in doubt, think quantum mechanics; when attribution is unclear, think information cocoon" has become a fashionable way of summarizing. On the left, "How terrible is the information cocoon"; on the right, "The information cocoon is a secret trap that hunts cognition." The "information cocoon" is being beaten 1008610010 times.

What is interesting is that when I searched for "information cocoon" online, I found that many marketing accounts accustomed to peddling anxiety have started mixing the "information cocoon" with self-help buzzwords like "high cognition" and doom-laden phrases like "ruining a generation," as if worried that public anxiety has not yet hit its ceiling.

In an age when the "post-truth era" is invoked everywhere and "cyber-balkanization" is declared at every turn, it is natural to worry that people will lose their openness and their willingness to hear both sides.

The question is: is the so-called "information cocoon" a real "cyber cage," or an imaginary straw man? And when criticism of the "information cocoon" ends up in anti-technology Luddism, is that not itself a way of being trapped in one's own narrowness?

01

A tells B one thing; B tells C that A said that thing plus something more; C ends up believing A said something A never said. This kind of distortion is a common scenario in the spread of information.

Now the "information cocoon" has suffered the same kind of distortion. Trace its origins and you will find that the "information cocoon" Sunstein first proposed has today been redefined far outside its original context.

Eighteen years ago, when Sunstein coined the term "information cocoon" in Infotopia, the Internet was still in its Web 1.0 stage. At the time, many luminaries, Negroponte among them, had predicted the arrival of the personalized "Daily Me."

Sunstein was one of them. He argued that the Internet does offer the public a "vast ocean of information," but that people do not take in everything they encounter; they absorb selectively, according to personal preference, and may thereby wall themselves into a cocoon.

Sunstein did tie the "information cocoon" to the backdrop of the Internet era, but in his view the crux of the problem lies in "information bias."

▲Sunstein’s “information cocoon” has actually been redefined today.

Information bias has something in common with "selective exposure" in communication studies and "confirmation bias" in psychology. In essence, it is a self-protection mechanism that relies on the brain's own "filter."

The contradiction between information explosion and limited brain capacity is a problem that people have faced for a long time, but there is no doubt that the advent of digital society has made it more prominent.

Jack Trout, the master of positioning, wrote in Differentiate or Die that human society has generated more information in the past 30 years than in the previous 5,000. Liang Yongan has likewise noted that the knowledge and information a young person acquires by their teens today may exceed what a person in antiquity knew by the age of 60.

To stave off information overload, the brain automatically screens information, following the human tendency to favor the similar and shun the different, to seek benefit and avoid harm.

Because it is human nature, selective information consumption has existed in every era, the Web 1.0 era included. This mode of reception, choosing rather than accepting everything, can be read positively as settling people into a cognitive comfort zone, or negatively as pushing them toward cognitive rigidity.

This reflects the two faces of information bias. Side A is a defense against information overload, easing the load on the brain; Side B is the crowding-out of "heterogeneous perspectives," which can easily narrow one's thinking. In a context of information overload, the positive effect of Side A actually outweighs the negative effect of Side B.

The "information cocoon" that is popular on the Internet now emphasizes that the cocoon is the product of the Internet's information supply model and transmission path, believing that information technology is the cause and information bias is the result. Embedding algorithm recommendations and information cocoons in the causal chain is a common attribution.

Sunstein probably did not see this coming: he attributed the formation of the "information cocoon" to information bias rather than to information technology, and when he proposed the concept, algorithmic recommendation was not yet in wide use.

02

So, will the Internet narrow people's horizons? Will information technology exacerbate people's information bias?

This remains highly contested; academics even debate whether the "information cocoon" exists at all.

Under the Zhihu question "How should we understand the 'information cocoon'?", the high-profile answerer "Su Lun" argued that the "information cocoon" as we habitually use it is a pseudo-concept.

He made several points in his post:

  1. Information bias is human nature, and it predates personalized algorithms: back in the traditional media era, when we read newspapers and magazines, we simply skipped whatever did not interest us.
  2. The Internet is not a sterile room but a chaotic, intermixed information environment; it breaks open closed information situations rather than creating them.
  3. The consequences of biased information intake are not as dire as feared; people are simply, by nature, over-wary of the changes that new technologies bring.

▲Screenshot of Su Lun’s Zhihu post.

There are many people in the academic community who believe that the "information cocoon" is a specious proposition, such as two well-known scholars in the domestic communication field, Chen Changfeng and Yu Guoming. They have both stated that the "information cocoon" hypothesis has never been confirmed in academic empirical research.

In my view, the "information cocoon" resonates so readily with the public because it scratches a widespread itch: it is only human to feel uneasy as information dissemination iterates from mass communication to segmented communication to niche communication, and to be unsettled by the online drift toward tearing, quarreling, and flaming. As a hypothesis, the "information cocoon" supplies an explanatory framework for those emotions.

But it has two shortcomings. First, it is only a theoretical deduction without empirical support, and it assumes users have a single channel for accessing information. Second, it invites misunderstanding, leading many to believe the so-called "information cocoon" appeared only with the Internet. Did the Internet give birth to the "information cocoon," or did it merely make an already existing one more visible? That distinction deserves care.

The belief that there was consensus before the Internet and division after it is most likely just an illusion born of the false-consensus effect.

In any case, whether the Internet has created an "information cocoon" should be weighed along multiple dimensions rather than one. In the Internet era, the offsetting effects of information diversity and niche communication on "cocooning versus cocoon-breaking" make the sweeping claim that the Internet causes "information cocoons" look rather arbitrary.

Interestingly, research on "information cocoons" has long been a hot topic in China: data show that as of February 6, 2020, Chinese scholars had published 584 articles on the theme of "information cocoons" in the CNKI database. Over the same period, only one article in the Web of Science database took "information cocoons" as its theme, and very few discussed "echo chambers" or "filter bubbles."

Could it be that the "information cocoon," too, has become a case of "an orange grown south of the Huai River is an orange, but grown north of it, it turns into a bitter trifoliate orange"?

03

It is interesting that some people's criticism of the "information cocoon" often slides into a witch hunt against algorithmic technology. They may not understand how algorithms work, but the "tittytainment" theory is their go-to theoretical weapon.

The algorithm recommends by interest and filters out the heterogeneous content you are not interested in; is that not precisely what drives the creation of "information cocoons"? That is their standard understanding.

What they think of as an algorithm is this: the algorithm will push whatever opinions I like to me and help me block opposing opinions.

If the algorithm could speak, it would probably want to cry foul: I am not that stupid, and you are not that innocent.

A bit of popular science: recommendation models include collaborative filtering, supervised-learning models such as logistic regression, deep learning, factorization machines, GBDT, and more. The "interest matching" most people picture is not computed along a single dimension. Semantic features of the content (keywords, topics, entities), text-similarity features, spatiotemporal features, user clustering, identity features such as gender, age, and location, plus noise filtering, hot-topic penalties, time decay, and exposure penalties all become inputs to the analysis.

▲Several major models involved in recommendation algorithms.
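
To make the "more than one dimension" point concrete, here is a minimal, hypothetical Python sketch of how several such signals might be blended into a single ranking score. The feature names, the weights, and the `cf_score` placeholder for a collaborative-filtering signal are all illustrative assumptions, not any platform's actual formula.

```python
"""A toy multi-signal ranking sketch. Everything here (feature names,
weights, the cf_score placeholder) is a hypothetical illustration."""
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def time_decay(age_hours: float, half_life: float = 24.0) -> float:
    """Exponential time decay: older items score lower."""
    return 0.5 ** (age_hours / half_life)


def score(user_profile: Counter, item_keywords: Counter,
          item_age_hours: float, cf_score: float, is_hot: bool) -> float:
    """Blend several signals instead of relying on interest match alone.
    cf_score stands in for a collaborative-filtering signal
    ('users like you also read this')."""
    content = cosine(user_profile, item_keywords)  # semantic interest match
    freshness = time_decay(item_age_hours)         # time decay
    hot_penalty = 0.8 if is_hot else 1.0           # damp over-exposed hot topics
    return (0.5 * content + 0.3 * cf_score + 0.2 * freshness) * hot_penalty


# A basketball fan's profile does not zero out a history item: the
# collaborative and freshness signals still give it a fighting chance.
user = Counter({"basketball": 5, "nba": 3})
print(score(user, Counter({"basketball": 2, "lebron": 1}), 6.0, 0.4, True))
print(score(user, Counter({"museum": 2, "history": 1}), 1.0, 0.7, False))
```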

The algorithm does not simply assume that because someone likes item 1 it should never recommend item 2. Instead, it draws on diverse signals to cover people's diverse, fickle, and ever-expanding interests. After all, not everyone knows in advance what will interest them, and even those who do find their interests shifting with time and place.

Look closely at what the algorithm actually recommends and you will find that it does not stop at your favorite basketball player, LeBron James; it also surfaces Jordan, Curry, Yao Ming, Ronaldo, Li Na, Lin Dan, and a still wider range of culture, sports, entertainment, and education content.

Users worry about an overly narrow information diet, and platforms worry about supplying one, since it is bad for long-term retention; so platforms keep optimizing their algorithms.

Google Chrome has an "Escape the Bubble" extension that, based on the user's reading habits, pushes back with content from the other side that is positive and easy to accept. The news app Read Across the Aisle maps 20 news brands and, when a user's reading skews too far toward one side, nudges them to adjust what they read. Douyin is working to make its knowledge base and content pool encyclopedic: drawing on the billions of "vector features" its models accumulate through automatic learning, it keeps refining the metrics its algorithm optimizes for and combines strategies such as content deduplication, scattering, and active exploration of users' diverse interests to keep the feed from becoming uniform...
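
As a rough illustration of what "deduplication, scattering, and active exploration of diverse interests" can mean in practice, here is a hypothetical re-ranking sketch: it caps how many items of the same topic appear near the top and occasionally slots in a lower-ranked item to probe interests the user has not expressed yet. None of this mirrors any specific platform's pipeline.

```python
"""Toy diversity re-ranking sketch: topic caps plus random exploration.
A hypothetical illustration, not any platform's actual pipeline."""
import random


def rerank(candidates, top_k=10, explore_rate=0.2, max_per_topic=2, seed=None):
    """candidates: list of (item_id, topic, relevance_score) tuples.
    Greedily pick high-scoring items while capping repeats of the same topic,
    and occasionally pick a random item to probe for new interests."""
    rng = random.Random(seed)
    pool = sorted(candidates, key=lambda c: c[2], reverse=True)
    feed, topic_counts = [], {}
    while pool and len(feed) < top_k:
        if rng.random() < explore_rate:
            pick = rng.choice(pool)  # exploration: random probe into the tail
        else:
            # exploitation: best-scored item whose topic is not over-represented
            pick = next((c for c in pool
                         if topic_counts.get(c[1], 0) < max_per_topic), pool[0])
        pool.remove(pick)
        topic_counts[pick[1]] = topic_counts.get(pick[1], 0) + 1
        feed.append(pick)
    return feed


items = [("v1", "basketball", 0.95), ("v2", "basketball", 0.94),
         ("v3", "basketball", 0.93), ("v4", "history", 0.70),
         ("v5", "cooking", 0.65), ("v6", "science", 0.60)]
print(rerank(items, top_k=5, seed=42))
```

Even in this toy version, the topic cap guarantees that a feed is not wall-to-wall basketball, and the exploration slot gives topics the user has never clicked a chance to appear.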

Moreover, algorithmic recommendation is often "combined" with other forms of distribution: editorial distribution brings "what you should know," search brings "what you want to know," recommendation brings "what you may be interested in," and following brings "updates from the people you care about." Platforms integrate these channels of information acquisition precisely to counter information bias.

Recommending content through "algorithm + hot topics + follows + search" has become standard practice for content platforms.
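
One hypothetical way to read "algorithm + hot topics + follows + search" is as several candidate streams merged into a single feed under per-channel quotas, with deduplication across channels. The channel names and quotas below are made up for illustration.

```python
"""Toy sketch of blending several distribution channels into one feed.
Channel names and quotas are illustrative assumptions."""


def blend(channels, quotas):
    """channels: dict of channel name -> ordered candidate list.
    quotas: dict of channel name -> slots per cycle (each must be >= 1).
    Cycle through the channels, taking up to `quota` fresh items from each,
    skipping anything already contributed by another channel."""
    feed, seen = [], set()
    cursors = {name: 0 for name in channels}
    while any(cursors[n] < len(items) for n, items in channels.items()):
        for name, quota in quotas.items():
            taken = 0
            while taken < quota and cursors[name] < len(channels[name]):
                item = channels[name][cursors[name]]
                cursors[name] += 1
                if item not in seen:  # dedupe across channels
                    seen.add(item)
                    feed.append((name, item))
                    taken += 1
    return feed


feed = blend(
    channels={"recommend": ["a", "b", "c"], "hot": ["b", "d"],
              "follow": ["e"], "search": ["f"]},
    quotas={"recommend": 2, "hot": 1, "follow": 1, "search": 1},
)
print(feed)  # "b" appears only once even though two channels surfaced it
```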

04

In his book The Inevitable, the well-known Internet thinker Kevin Kelly proposed the idea of an "ideal filter." In his view, an ideal filter would recommend "what my friends like, but I don't know yet," and would be "an information flow that recommends something I don't like now, but want to try to like."
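
Kelly's first criterion, "what my friends like but I don't know yet," can be read as a simple set operation over friends' preferences. A minimal sketch of that reading, with made-up data:

```python
"""Toy reading of Kevin Kelly's 'ideal filter': surface items my friends
like that I have not seen yet. All data here is made up for illustration."""


def friends_like_but_new_to_me(my_history: set, friends_likes: dict, limit: int = 5):
    """Count how many friends like each item absent from my own history,
    and return the most widely liked of those items first."""
    counts = {}
    for likes in friends_likes.values():
        for item in likes - my_history:
            counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:limit]


me = {"nba_highlights", "sneaker_reviews"}
friends = {"ann": {"nba_highlights", "jazz_piano"},
           "bo": {"jazz_piano", "street_food"},
           "cai": {"street_food", "sneaker_reviews"}}
print(friends_like_but_new_to_me(me, friends))  # e.g. ['jazz_piano', 'street_food']
```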

Today's algorithms have already achieved this: balancing the efficiency of information screening against the monotony of information presentation has become the "progress bar" that algorithms keep loading.

Today, algorithmic techniques for generation and synthesis, personalized push, ranking and selection, retrieval and filtering, and scheduling and decision-making are widely used across information-distribution platforms, short-video platforms, e-commerce, social networks, and food-delivery services. When we hail a ride, the navigation recommends the shortest or fastest route; when we order delivery, the platform surfaces the best-rated restaurants closest to us...

It is not so much that algorithms make information matching more efficient as that they spare us the cost of information screening: faced with massive amounts of information, algorithms answer our "urgent need to lighten the load on the brain."

▲In the face of massive amounts of information, algorithms meet the "urgent need to reduce the burden on the brain."

Amid today's diversification of information, algorithms are also working to steer people's mode of information reception toward a more "varied" diet.

An information platform is like a supermarket: it would rather have customers browse more kinds of goods than stare at the same products on every visit.

Yu Guoming holds that intelligent algorithmic recommendation is by nature anti-cocoon: "The social structure of information-distribution platforms that use multiple algorithms can, in terms of information flow, generally and effectively avoid the 'cocoon effect.'" From the standpoint of commercial interest, the better move for an algorithm as it updates and iterates is to keep tapping each individual's untapped potential for information consumption.

Communication professor Yang Guang also concluded through empirical research: users and algorithms are always in a state of mutual response and mutual development. Algorithm technology creates many opportunities for news encounters and broadens users' information horizons.

To be sure, algorithms are amplifiers, and in the two-way interaction of "people shaping the environment" and "the environment shaping people," what their amplification restores is the many-sidedness of human beings at multiple levels.

Algorithms amplify the power of goodness and beauty through resonance. The viral short videos of barbecue, the Ice and Snow World, and malatang that successively set off tourism booms in Zibo, Harbin, and Tianshui are proof of that.

Algorithms also expose many people's weak spots: path dependence in acquiring information, shrinking social circles in exchanging it, and emotionality in digesting it. When both popular-science content and lowbrow content are pushed to them, some people skim the former and loop the latter. Yet too many will not aim the arrow at their own mental laziness; they simply let the algorithm take the blame. After all, blaming others is easier than blaming oneself, and faulting the algorithm is more comfortable than making demands of oneself.

Seen this way, whether the "information cocoon" exists is debatable, but the "thinking cocoon," the "social cocoon," and the "cognitive cocoon" certainly do exist... People spin their own cocoons, and once entangled, insist that the silk was spat out by the external environment.

05

This is not to say that algorithms have no negative externalities, only that they are not nearly as fond of "spinning cocoons" as some people imagine.

Right now, using the so-called "information cocoon" to lash out at technologies, algorithms included, may amount to a wrong diagnosis and a wrong prescription: the real illness may be that, in exaggerating the risks of new technology, one drifts toward the anti-technology extreme.

Today, when Nick Seaver observes that "algorithms are no longer just a part of cultural construction, but have become cultural practice itself; algorithms cannot be understood only from the perspective of mathematical logic," it is narrow-minded to keep viewing algorithms as monsters to be slain.

What is truly scary is not the algorithm, but the single control and limited supply of information.

Algorithms are not without their shortcomings, but some people might want to listen to Kevin Kelly’s words:

  1. We rarely pay attention to the disadvantages of backward technology, but always worry about the risks that new technology may bring. Those who oppose cars on the grounds that cars will cause traffic accidents will pretend not to see the disadvantages of horse-drawn carriages.
  2. Problems brought about by technology can never be solved by reducing technology, but by inventing better technology.

The problem of information bias was not created by technology, but it can be eased by technology: extracting "dark knowledge" and hidden variables from environmental features, moving beyond coarse-grained classification of information needs, and steering algorithms with indicators drawn from multiple disciplines so as to shape a "wide-angle" view for users...

From a fundamental and long-term perspective, information bias is a problem that is difficult to solve with a single technology or a single platform. The solution lies in providing a diversified information market, guiding the public to improve their information literacy, making good use of new technological tools and media, and broadening channels for obtaining information.

Don't forget, even Sunstein himself wrote in Infotopia:

New communication technologies are making things better, not worse.

Author: She Zongming

WeChat public account: Digital Force Field (ID: shuzilichang), resist entropy increase and salvage fun.
