On a regular Monday morning, you get a video call from a friend. He looks distressed and flustered, and, concerned, you ask him what happened. He tells you his 5-year-old son was hit by a car and is in the ICU, and that he doesn't have enough money to pay for the treatment. Now he's crying and asking if you could lend him the money to save his poor son. What would you do?
“Of course I will give him the money to save the boy. He’s my friend and he needs my help.”
If that’s what you’re thinking, think again.
Because it could be a scam. The person you saw "in the flesh" may not be who you think he is, even though "your friend" was speaking to you face to face on a video call.
Police in the northern Chinese city of Baotou have uncovered a deepfake fraud in which a man was scammed out of 4.3 million yuan, the most so far stolen in China in this way.
According to disclosures by police in Fuzhou in eastern Fujian province, on April 20, a fraudster stole an individual’s WeChat account and used it to make a video call to a businessman named Guo, an existing contact on the individual’s WeChat app.
The con artist asked for Guo's personal bank account number, claimed the money had been wired to that account, and sent him a screenshot of a fraudulent payment record.
Without checking that he had received the money, Guo sent two payments from his company account totaling the amount requested, the police said.
"At the time, I verified the face and voice of the person video-calling me, so I let down my guard," police quoted Guo as saying. In fact, the impersonator had used face-swapping and voice-mimicking artificial intelligence.
The man only realized his mistake after messaging the friend whose identity had been stolen, who had no knowledge of the transaction. Guo alerted local police.
At the request of the Fuzhou authorities, their colleagues in Baotou later intercepted some of the funds at a local bank, but nearly 1 million yuan was unrecoverable. The police’s investigations are ongoing.
The case sparked discussion on the microblogging site Weibo about threats to online privacy and security. The hashtag "#AI scams are exploding across the country" gained more than 180 million views on Tuesday before it was apparently taken down, reportedly over fears that the case might inspire copycat crimes.
"If photos, voices and videos all can be utilized by scammers," one user wrote, "can information security rules keep up with these people's techniques?"
A number of similar frauds have occurred around China as AI technology becomes more widely used. The public security departments of Shanghai and Zhejiang province have previously disclosed such cases.
In addition to direct scams, some frauds involve live e-commerce platforms where AI technology is used to replace the faces of live streamers with those of stars and celebrities, to trade on their market appeal and fool people into buying goods, raising issues of both fraud and intellectual property rights.
An illegal industrial chain has formed with people using face-swapping technology for online scams, according to a report by China News Service's financial channel.
A website providing deepfake software services sells a complete set of models that can be used on various live-streaming platforms for only 35,000 yuan, according to its customer service.
Deepfake technology, which has progressed steadily for nearly a decade, has the ability to create talking digital puppets. The software can also create characters out of whole cloth, going beyond traditional editing software and expensive special effects tools used by Hollywood, blurring the line between fact and fiction to an extraordinary degree.
Deepfakes have been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally "undress" more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives' voices on the phone.
Another example of how eerily accurate the technology has become is a replication of the actor Leonardo DiCaprio speaking at the United Nations.
According to the World Economic Forum (WEF), deepfake videos are increasing at an annual rate of 900%, and recent technological advances have even made it easier to produce them.
Identifying disinformation will only become more difficult as deepfake technology grows sophisticated enough to produce Hollywood-grade footage on nothing more than a laptop.
In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.
Many people are growing concerned about deepfakes and AI-generated content being used for evil. First, the technology can spread misinformation, for instance by leading people to believe a politician made a shocking statement they never made. Second, it can be used to scam people, especially the elderly.
In China, AI companies have been developing deepfake tools for more than five years. In a 2017 publicity stunt at a conference, the Chinese speech-recognition specialist iFlytek made a deepfake video of the U.S. president at the time, Donald J. Trump, speaking in Mandarin.
But even that AI pioneer has now fallen victim to the very technology it has been developing. Shares of iFlytek fell 4.26% on Wednesday after a screenshot went viral of what appeared to be a chatbot-generated article making unsubstantiated claims against the company, fueling public concern about the potential misuse of generative AI.
The potential pitfalls of groundbreaking AI technology have received heightened attention since the US-based company OpenAI launched ChatGPT in November.
China has announced ambitious plans to become a global AI leader by 2030, and a slew of tech firms including Baidu, Alibaba, JD.com, NetEase and ByteDance have rushed to develop similar products.
ChatGPT is unavailable in China, but the American software is acquiring a base of Chinese users who use virtual private networks to gain access to it for writing essays and cramming for exams.
But it is also being used for more nefarious purposes.
This month police in the northwestern province of Gansu said "coercive measures" had been taken against a man who used ChatGPT to create a fake news article about a deadly bus crash that was spread widely on social media.
China has been tightening scrutiny of such technology and apps amid a rise in AI-driven fraud, mainly involving the manipulation of voice and facial data, and adopted new rules in January to legally protect victims.
And a draft law proposed in mid-April by China's internet regulator would require all new AI products to undergo a "security assessment" before being released to the public. It would also require service providers to verify users' real identities and to disclose the scale and type of data they use, their basic algorithms and other technical information.
The global buzz surrounding the launch of ChatGPT has seen a spate of AI-related product launches in China. However, the Fuzhou fraud case, combined with other high-profile deepfake incidents, has reminded people of the potential downsides of such advances in artificial intelligence.
AI regulation is still a developing subject in China. Initial excitement around the potential of ChatGPT and similar AI products in China has given way to concerns over how AI could be used to supercharge criminal activity.
Tech talk aside, what can ordinary people do to avoid falling victim to AI-related scams?
First of all, experts suggest being wary of unexpected, urgent calls asking for money that appear to come from loved ones or colleagues. Try asking the caller personal questions to verify their identity.
Second, verify through a different channel. Make up an excuse, say you have to call them back, and then call back on the number you know to be the person's.
And if you receive such video calls, look for unnatural facial features and expressions that give fakes away. Unnatural eye movement and a lack of blinking are telltale signs, since deepfake tools struggle to replicate natural blinking. Lighting and facial features such as hair and teeth may appear mismatched with the rest of the image or video. The most obvious giveaways are misaligned facial expressions, sloppy lip-to-voice synchronization, unnatural body shapes, and awkward head and body positions.
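The blinking cue above can even be checked programmatically. A common heuristic in deepfake-detection research is the eye aspect ratio (EAR): the ratio of an eye's vertical openings to its width, computed from six landmark points around the eye, which drops sharply during a blink. The sketch below is illustrative only; it assumes eye landmarks have already been extracted per frame by some face-landmark library, and the function names and the 0.2 threshold are assumptions, not part of any standard API.

```python
import math

def eye_aspect_ratio(landmarks):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks
    ordered p1..p6 around the eye, where p1 and p4 are the horizontal
    corners. EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it falls toward
    zero when the eye closes."""
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.2):
    """Count blinks in a per-frame EAR series as the number of times the
    EAR crosses below the threshold. A long clip with zero blinks is a
    hint (not proof) that the face may be synthetic."""
    blinks = 0
    below = False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks
```

For example, a wide-open eye with corners at (0, 0) and (4, 0) and lids two units apart yields an EAR of 0.5, while near-closed lids yield values close to zero; humans typically blink every few seconds, so an EAR series that never dips is suspicious.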
Beyond that, on April 11 China also issued its first document regulating generative AI: the Cyberspace Administration of China formally released the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment), which lays out rules targeting exactly this scenario. For live streamers who use AI to swap celebrities' faces onto their own while selling goods, the principle that "whatever the law does not prohibit is permitted" no longer applies.
Executive Editor: Sonia YU
Editor: LI Yanxia
Host: Stephanie LI
Writer: Stephanie LI
Sound Editor: Stephanie LI
Graphic Designer: ZHENG Wenjing, LIAO Yuanni
Produced by 21st Century Business Herald Dept. of Overseas News.
Presented by SFC