AI is a standard feature of future SaaS, but it is not a panacea

In recent months, AIGC (AI-Generated Content) has exploded in popularity, and more and more SaaS companies are using AI as a selling point to tell stories to customers and investors.

It seems that no matter what kind of SaaS you build, you have to bolt on AI to look cutting-edge.

This scene reminds me of 2018, when blockchain was all the rage: many companies tried every possible way to integrate blockchain into their products, and the market was flooded with "SaaS + blockchain" concepts.

In my opinion, AI and SaaS have a very good fit:

Training AI requires large amounts of data, and SaaS products touch a lot of data in real business scenarios. That data helps AI improve; a more capable AI can in turn further empower SaaS, create greater user value, and attract even more data.

Almost all the world’s well-known SaaS companies have incorporated AI capabilities into their products, such as Salesforce, Shopify, HubSpot, Atlassian, Zoom, etc. [1].

Therefore, AI is a standard feature of future SaaS.

However, AI is not a panacea. It has its own advantages and limitations. We should use AI in the right places instead of adding AI as a selling point to every scenario.

So in this article, I want to discuss two questions:

  • In the SaaS field, which scenarios are suitable for AI, and which are not?

  • What should we pay attention to when using AI?

# Which scenarios are suitable for using AI

First, AI is suitable for handling specific and repetitive procedural tasks.

For example, in e-commerce customer service, most customer questions can be answered by following a standard script, and few people enjoy answering these repetitive questions all day.

This type of problem is therefore well suited to combining AI with an intelligent customer service SaaS product.

For example, in finance, processing invoices and bookkeeping are repetitive, tedious, but necessary tasks. They are boring for employees, and manual processing is error-prone.

In these cases, AI's image recognition and language processing capabilities can take over the work, improving the company's efficiency and accuracy while freeing employees to spend their time on more valuable things, killing two birds with one stone.
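The structured-extraction half of that pipeline can be sketched as below. The field names and formats are illustrative assumptions; in a real system the input text would come from an image-recognition (OCR) step, and anything that fails to parse would be routed to a person.

```python
import re

# Hypothetical invoice field patterns, purely for illustration.
INVOICE_NO = re.compile(r"Invoice\s*#?\s*:?\s*([A-Z0-9-]+)", re.I)
DATE = re.compile(r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})", re.I)
TOTAL = re.compile(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", re.I)

def parse_invoice(ocr_text: str) -> dict:
    """Pull structured bookkeeping fields out of free-form invoice text."""
    fields = {}
    for name, pattern in [("invoice_no", INVOICE_NO), ("date", DATE), ("total", TOTAL)]:
        m = pattern.search(ocr_text)
        fields[name] = m.group(1) if m else None  # None -> review by a human
    return fields

sample = "Invoice #: INV-2023-0042\nDate: 2023-05-01\nTotal: $1,234.56"
print(parse_invoice(sample))
```

A production system would use a learned document model rather than regexes, but the shape is the same: unstructured input in, structured ledger entries out, humans handling only the exceptions.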

Second, AI is suitable for jobs that require processing massive amounts of data.

Take AfterShip Tracking as an example. It helps e-commerce sellers provide shipment-tracking services to their stores' consumers.

What we originally did was very simple:

Connect to logistics data from carriers all over the world and show users their delivery progress.

But what consumers really care about is not "where is my package" but "when will my package arrive". At the same time, not every carrier can provide an accurate estimated delivery time.

Therefore, AfterShip Tracking uses AI to calculate an estimated delivery date for each shipment, based on ten years of accumulated logistics data. In many cases it is even more accurate than the carriers themselves.

(Image source: https://www.aftership.com/edd)
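The core idea, predicting delivery time from historical data rather than carrier promises, can be sketched very simply. The records and the route-median model below are illustrative assumptions; AfterShip's actual models are far more sophisticated, but this shows why historical data beats having no estimate at all.

```python
from collections import defaultdict
from statistics import median

# Hypothetical historical shipments: (origin, destination, transit_days).
historical = [
    ("US", "GB", 6), ("US", "GB", 7), ("US", "GB", 9),
    ("US", "DE", 8), ("US", "DE", 10),
]

def build_model(records):
    """Learn the median transit time per route from past shipments."""
    by_route = defaultdict(list)
    for origin, dest, days in records:
        by_route[(origin, dest)].append(days)
    return {route: median(days) for route, days in by_route.items()}

def estimate_days(model, origin, dest, default=10):
    """Predict transit days for a route, falling back to a default for unseen routes."""
    return model.get((origin, dest), default)

model = build_model(historical)
print(estimate_days(model, "US", "GB"))  # median of 6, 7, 9 -> 7
```

A real estimated-delivery-date model would also account for carrier, service level, seasonality, and current network conditions; the point is that the signal lives in accumulated data that no single carrier sees.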

Humans are not good at processing massive amounts of data, and traditional software cannot generate intelligent suggestions; this is exactly where AI excels. By combining AI with SaaS to process and analyze data, we can provide better services to customers.

In my opinion, AI and SaaS do have a lot of room for development in many specific business scenarios.

Because SaaS is essentially about solving a specific problem, and AI at its current stage is also good at solving specific problems.

But this does not mean that we have to find ways to incorporate AI into every business scenario.

# Which scenarios are not suitable for using AI

First, creative content creation, such as articles, movies, and music.

When ChatGPT first came out, someone asked me what I thought of articles generated using ChatGPT.

My point is:

AI-generated articles lack value. I would rather spend days writing one high-quality article that genuinely helps others than spend a few minutes generating a thousand mediocre articles to boost SEO (Search Engine Optimization).

I have a principle when writing articles, which is:

I must share some valuable content that no one else has said.

If there is already plenty of content on a topic online, and I have no unique insights or hands-on experience to add, I will not write about it just to chase attention.

However, AI currently needs a large amount of similar content as input before it can generate new content.

At this stage, AI is not good at creating new ideas and concepts, especially those related to human feelings and thinking.

And even if AI can generate genuinely new ideas and concepts in the future, I don't think content should be created entirely by AI. Whether it is articles, movies, or music, content shapes a person's values, and I still believe content created by real people is more appropriate for that role.

AI can generate huge volumes of content and publish it on the Internet, but volume does not make the values behind it correct. If readers mistake this content for something humans created and published, and conclude that most people on the Internet think this way, it will inevitably affect their values.

So I think we can use AI as an auxiliary tool for content creation, using it to gather information and improve efficiency, but we should never use it to replace content creation itself.

Stack Overflow has likewise banned sharing ChatGPT-generated answers in its community: although AI-generated code answers look plausible, they have not been carefully verified and their accuracy rate is low, which misleads readers looking for a correct answer [2].

Second, areas that may cause harm to people.

Because AI lacks moral values (at least at this stage), it cannot judge whether something will harm people, so we should not use AI to build services in areas where it may cause harm.

For example, there have been past cases where a photo recognition system mislabeled human faces as animals, which is undoubtedly hurtful to the people involved [3].

Likewise, soon after ChatGPT launched, people found they could get it to produce detailed steps for "how to murder someone" and "how to break into a house and steal" [4], which would undoubtedly cause harm.

So in my opinion, we should not ignore the harm that AI may cause just because it is so powerful and easy to use. On the contrary, because AI is so powerful, we should use it with caution, especially in areas where it may cause harm to people.

Google has expressed a similar view: because AI may generate harmful content and damage Google's brand, it is not launching a ChatGPT-like service for now [5].

# What should we pay attention to when using AI

First, people have the right to know that AI is providing services to them.

As the technology advances, AI services may become more and more indistinguishable from real people, but as service providers we have a responsibility to tell customers, "It is an AI that is currently serving you."

If the other party can accept it, then they can continue to use our services. If they cannot accept it, they can also choose to refuse our services.

But no matter what, customers should have the right to know how we provide our products and services.

Second, we need to establish mechanisms to supervise and handle some special scenarios.

I believe most people who use AI in their products and services have good intentions. But given the limits of the technology and the diversity of real-world scenarios, AI will inevitably sometimes do things we don't expect, so we need mechanisms to supervise and handle those cases.

For example: conduct deeper testing and review of special scenarios, and keep records of abnormal situations for follow-up.
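One minimal sketch of such a mechanism: log every AI interaction as an audit record, and queue suspicious answers for human review. The keyword-based flag rule below is a deliberately crude illustration; real systems would use proper safety classifiers, but the shape (record everything, escalate anomalies to people) is the point.

```python
import datetime
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

# Hypothetical terms that trigger human review, purely for illustration.
FLAGGED_TERMS = {"weapon", "break into", "steal"}
review_queue: list[dict] = []

def audited_reply(model_fn, prompt: str) -> str:
    """Call the AI, keep an audit record, and queue risky answers for human review."""
    answer = model_fn(prompt)
    record = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": answer,
    }
    log.info("AI call: %s", record)  # every call is recorded
    if any(term in answer.lower() for term in FLAGGED_TERMS):
        review_queue.append(record)  # a human checks these later
    return answer

# Stubbed model standing in for a real AI service, for demonstration only.
audited_reply(lambda p: "Here is how to break into ...", "test prompt")
print(len(review_queue))
```

In practice the audit log would go to durable storage and the review queue to an operations team, but even this skeleton ensures no AI answer disappears without a trace.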

The "Ethics Guidelines for Trustworthy AI" [6] issued by the European Commission covers this topic in complete and careful detail and is well worth reading.

# Summary

The full name of SaaS is Software as a Service, but the real focus has never been on the Software in the front, but the Service in the back.

What customers want is business results, not technical tools. Customers don't care whether AI is used behind the service, as long as it delivers results.

So for those specific repetitive procedural tasks and tasks that require processing massive amounts of data, we can use AI to provide better services to our customers.

But AI is not omnipotent, and we should not add AI to every SaaS as a gimmick to tell stories to customers and investors.

Especially in the field of creative content creation and areas that may cause harm to people, we need to use AI with caution.

At the same time, when using AI to provide services, we need to pay attention to:

1. People have the right to know that AI is providing services to them;

2. We need to establish mechanisms to supervise and handle some special scenarios.

If this article is helpful to you, please like and forward it so that more people can see it. If you have different opinions, you are also welcome to discuss with me in the comment section.

Reference Links:

[1] https://www.smartkarrot.com/resources/blog/top-ai-companies/

[2] https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned

[3] https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/?sh=69c40cd5713d

[4] https://twitter.com/davisblalock/status/1602600453555961856

[5] https://www.cnbc.com/2022/12/13/google-execs-warn-of-reputational-risk-with-chatgbt-like-tool.html

[6] https://www.secrss.com/articles/10224
