KUALA LUMPUR, Dec 6 — The police have expressed concern over the growing use of artificial intelligence (AI) by criminal groups for malicious purposes in the form of deepfakes, voice spoofing, and financial market manipulation.
In a report by The Star today, Federal Commercial Crime Investigation Department (CCID) director Comm Datuk Seri Ramli Mohamed Yoosuf said a recent deepfake video featuring a Malaysian leader purportedly endorsing a get-rich-quick scheme was a clear example of such AI manipulation.
“The promises made in that video are too good to be true, which means it is most definitely an investment scam and there is no way that a political leader would promote such a thing. It is absurd.
“While the video here is about gaining quick wealth, there are other aspects that can similarly be manipulated, especially in politics and social engineering.
“AI is already here and because of this, it is important not only for the police but also the public to be aware of the potential risks AI could pose in the near future.
“Today’s world has shown how AI is increasingly taking over tasks and roles previously done by humans.
“While some of us might know about the advancements in this field, others might still be unaware of the potential risks that follow the swift advancements made in AI,” he was quoted as saying.
Ramli also said the police anticipate that syndicates will employ AI in their unlawful activities targeting Malaysians as early as the middle of 2024.
“Once this occurs, everything in the financial sector, every service that has gone online, could face the risk of being infiltrated.
“AI could be used in the creation of algorithms that are capable of hacking computer systems, while other algorithms could also be used to analyse data and manipulate results, which gives them the potential to be used to influence or cripple financial markets,” he told The Star.
He further highlighted that AI could play a role in sophisticated video and audio manipulation, posing risks such as potential identity theft and the production of deepfake videos.
“In this scenario, the possibilities are limitless as crime syndicates could use deepfake images, videos or voices to dupe people and organisations.
“They could use such deepfakes in bogus kidnap-for-ransom cases, where they trick families into believing they have kidnapped a loved one, while some could even use it to create lewd or pornographic images of victims that could in turn be used to blackmail them,” he was quoted as saying by the news outlet.
He also said by generating persuasive false identities using photographs or videos, criminal syndicates might impersonate individuals, soliciting money or deceiving victims into believing that a family member is in danger.
This, Ramli said, could also be employed to disseminate propaganda and false information, which has the potential to fuel public anxiety.
“There are indicators that AI could be used to perpetrate economic crimes.
“AI scientists are also talking about quantum computing that will enable decryption, which in turn could render all binary encryption technology that is currently in place useless.
“We have been keeping up with the latest news on the use of AI in crime in the region and are aware of an instance in Hong Kong earlier this year where a syndicate allegedly used AI deepfake technology in the application of loans,” he told The Star.
Ramli went on to say that although there have been no reported commercial crime cases involving the direct application of AI thus far, he does not discount the possibility of it becoming a significant issue in the future.
“This is why the public needs to be prepared and be in the know of such things.
“The best weapons the public will have against AI are knowledge and awareness.
“If the public in general are aware of how AI can be used, they will be extra cautious and not be easily duped by syndicates employing such tactics,” he was quoted as saying.
On August 25, Hong Kong Free Press reported that police in the Special Administrative Region had exposed a syndicate employing eight stolen identity cards to submit 90 loan applications and register 54 bank accounts.
In a ground-breaking case for the region, deepfake techniques were employed at least 20 times to mimic the individuals depicted in the identity cards, deceiving facial recognition programs.
It was reported that six individuals were arrested in connection with the fraudulent activities.