According to Mr. Vaishnaw, the government recently served notices to the companies concerned over the issue of deepfakes; the platforms have responded, but they need to act more aggressively against it.
Speaking on the sidelines of a conference, Mr. Vaishnaw noted, “Maybe three to four days later, we will contact them for brainstorming and ensure that platforms do enough for prevention and cleaning.”
The minister said that both Meta and Google will also attend the meeting. Mr. Vaishnaw also conveyed that the ‘safe harbour immunity’ provided under the Information Technology (IT) Act will not apply unless platforms promptly take strong steps.
Safe harbour immunity – Shields intermediaries from liability for user-generated content posted online.
In the recent past, ‘deepfake’ clips of prominent stars went viral, causing agitation and raising questions about the unlawful manipulation of technology and the tools employed to doctor material and spread false narratives.
Prime Minister Narendra Modi on Friday warned that the rise of deepfakes created with the aid of artificial intelligence could become a major crisis and ignite unrest among people, and said the media should alert the public to their misuse and educate people about it.
Deepfakes are synthetically manufactured media, manipulated with AI technology, that realistically fabricate a forgery in which someone is misrepresented or replaced.
The danger of deepfakes:
Recently, a deepfake video showing actor Rashmika’s morphed face circulated on social media, causing controversy and prompting calls for regulation of the technology to prevent abuse.
It was reported that the original video was of a British-Indian “influencer”, onto whose face the actor’s face had been superimposed. Similarly, doctored videos purportedly showing other actors have circulated on various social media platforms.
Last week, the Centre directed major social media companies to identify misinformation, deepfakes, and other content that contravenes their own rules, and to delete such content within a prescribed time of 36 hours of it being reported to them.
The Minister of State for Electronics and IT, Rajeev Chandrasekhar, had said that deepfakes hurt women more than anyone else.
The safety and trust of our ‘Digital Nagriks’ (digital citizens) remains the first priority of the Narendra Modi government.
On another question, about Apple’s threat notifications, Mr. Vaishnaw said that both Apple and CERT-In, the government’s cybersecurity agency, are investigating. “Apple is investigating. So is CERT-In. I hope, soon enough, we will get some results,” Mr. Vaishnaw remarked.
A little over two weeks ago, some opposition leaders said they had been alerted by Apple Inc. that state-sponsored attackers were attempting to remotely compromise their iPhones, and accused the state of being behind the hacking.
The Union Minister described this as a new threat to society that required “immediate steps” to combat it. The attendees therefore agreed that within 10 days they would draw up simple, workable action items, organised around four pillars. He added, “We must concentrate ourselves on four areas – detection, prevention, a mechanism for reporting of deepfakes, and creating awareness.”
Mr. Vaishnaw added, “The next meeting with stakeholders will be held in the first week of December to follow up on the decisions that were made today.”
This follows Mr. Vaishnaw’s announcement on November 18 that the government had sent notices to the firms about the deepfake problem. He acknowledged that they had been responsive, but emphasised that they should do everything in their power to fight such content.
“They are doing something, but it may not be enough,” he said, adding that a meeting with all the platforms is expected shortly. “Three or four days later, we will invite them to share ideas with us on that and ensure that platforms are making enough efforts to curb it (deepfakes),” Mr. Vaishnaw said.
More importantly, Mr. Vaishnaw stressed that although platforms enjoy so-called safe-harbour immunity under the IT Act, this provision will apply only if they take quick and significant action.
Following a spate of AI-generated ‘deepfake’ videos going viral on social media, there is growing concern about the potential misuse of the technology and the safety issues surrounding deepfake content.