Don't believe everything you see on social media

Oman Saturday 18/March/2023 23:15
By: Times News Service

Muscat: The recent video clip of a ‘cheetah’ sprinting alongside a car on Sultan Qaboos Street and racing past people on Muttrah Corniche had social media abuzz with various theories.

A couple of hours after the video went viral, Times of Oman clarified to its readers that there was no cheetah prowling the streets of Muscat.

Rather, it was an advertising campaign in which Artificial Intelligence (AI) and creativity were used smartly.

In this particular example, the deepfake certainly amazed us, but it also raises concerns about whether we can be sure of what’s real. In recent years, technologies such as AI and deepfakes have advanced to the point where it is becoming quite difficult to spot the flaws in these creations.

There was a time when it was obvious that what we were looking at wasn’t real, but we’ve seen increasingly convincing deepfake examples in recent times.

It is not just deepfakes that are a cause for concern; the growing threat of malicious content on social media has also led experts to call for more caution when viewing and sharing information.

Another recent example that went viral in Oman, but was later found to be misleading, was a stunt scene for an Indian movie being shot in Muscat. Although the stunt was performed with all concerned authorities notified and the stuntman came to no harm, videos of the stunt spread like wildfire, raising concerns about the driver’s safety as cars went flying at high speed.

Times of Oman confirmed that the stuntman was safe. These two examples are only the tip of the iceberg.

The growing menace of fake videos, false information and the spread of malicious content is a cause for worry, and the Royal Oman Police (ROP) regularly sends out warnings urging people to be alert.

A police official said: “The Royal Oman Police always calls upon everyone not to broadcast or circulate fake and false news, videos or pictures that disturb public peace and tranquility.”

“The Royal Oman Police also draws everyone’s attention to the need to verify any information, whether local or international, on any social media platform, and calls on people not to circulate false news and to verify its authenticity before publishing it.”

Mohammed Al Balushi, an IT expert based in Muscat, said: “In recent times, the tools for fabricating videos have developed remarkably, to a degree that threatens individual privacy and sometimes threatens to ignite political crises between countries as well as between individuals and groups. These fabricated videos spread on websites and social media are not limited to political and social content; they also include uncensored material intended to attract high views. We need to be careful and not believe everything we hear or see.”

Moath Al Saeedi, a social media activist, said: “Deepfakes now include both visual and audio content. The use of artificial intelligence (AI) is no longer limited to research or useful purposes but, sadly, is used to cheat and impersonate people. It is necessary for people to be cautious and double-check the authenticity of a clip or message.”

Al Balushi added that there are ways to detect deepfakes, and it is important that people learn to spot the difference between a real and a fake story on social media.

Penalty for spreading rumours
There is no specific provision in Omani law criminalising the spreading of rumours and lies. Rather, this act is treated as a breach of public order, as stated in Article (19) of the Law on Combating Information Technology Crimes, which stipulates that anyone who uses the information network or information technology means to produce, publish, distribute, purchase or possess anything that involves prejudice to religious values or public order shall be punished by imprisonment for a period of not less than one month and not exceeding three years and a fine of not less than OMR1,000 and not more than OMR3,000, or by one of these two penalties.

Public order means a set of security rules and means that preserve security and tranquility and regulate the social life of citizens and residents in the Sultanate of Oman. Spreading rumours and false news affects the security and economic stability of the country; therefore, the above article is applied as a punishment to deter the convict from committing such a crime again.

Who has the right to initiate this lawsuit?
The crime referred to in Article (19) of the Law on Combating Information Technology Crimes is prosecuted as a public lawsuit, and filing it falls within the inherent jurisdiction of the Public Prosecution.

The law obligates anyone who witnesses or learns of the occurrence of a crime to notify the competent authorities. Whenever the crime comes to the knowledge of a judicial police officer, he is obligated to notify the Public Prosecution, which has a duty to investigate the crime in question and decide whether to close the case or refer it to court.

What is a deepfake?
Deepfakes take their name from the fact that they use deep learning technology to create fake videos. In simple words, deepfakes are the 21st century’s answer to Photoshopping.

Deep learning technology is a kind of machine learning that applies neural net simulation to massive data sets. Artificial intelligence (AI) effectively learns what a particular face looks like at different angles in order to transpose the face onto a target as if it were a mask.
Huge advances came through the use of generative adversarial networks (GANs), which pit two AI algorithms against each other: one creating the fake and the other grading its efforts, teaching the synthesis engine to make better fakes.
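
For readers curious about the mechanics, the sketch below is a minimal, hypothetical illustration of that adversarial idea, written in PyTorch with made-up layer sizes and random stand-in data; it is not the code behind any deepfake tool mentioned in this article.

```python
# Toy sketch of adversarial training (illustrative assumptions only):
# a generator learns to produce fakes, a discriminator learns to grade them.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # assumed toy dimensions, not real image sizes

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs a score for how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(32, data_dim) + 2.0   # random stand-in for real data
    fake = G(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real from fake ("grading the effort").
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator ("making better fakes").
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

In real deepfake systems the samples are video frames of faces and the networks are far larger, but the competitive training loop follows the same pattern.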

Accessible software tools such as FakeApp and DeepFaceLab have since made deepfake technology available to all. The technology is now more often being used for sinister ends, creating an information environment in which disinformation can thrive.
The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it is easier to raise doubts about specific events.

They also pose a personal security risk: deepfakes can mimic biometric data, and can potentially trick systems that rely on face, voice, vein or gait recognition. The potential for scams is clear.

Phone someone out of the blue and they are unlikely to transfer money to an unknown bank account. But what if your “mother” or “sister” sets up a video call on WhatsApp and makes the same request?

While AI-based tools to detect and counter deepfakes continue to improve, one of the safest options is to stay on guard.