A U.S. intelligence official has raised concerns that ChatGPT, the AI chatbot that can converse and write like a human, could be abused for phishing attacks that steal personal information via e-mail.
Speaking at the Center for Strategic and International Studies (CSIS) on the 11th, Rob Joyce, head of the National Security Agency's (NSA) cybersecurity division, responded to the moderator's remark that people might be more susceptible to phishing messages written by ChatGPT by saying, "I totally think so too."
"ChatGPT technology is impressive and sophisticated," Joyce said. While you cannot automate hacking attacks with ChatGPT or tell it to find every vulnerability in a particular piece of software, he noted, "it will optimize the flow of that work."
He said that foreign malicious actors will use ChatGPT to write highly convincing, native-sounding English and use it for phishing attacks or for making contact with a target. "That will be a problem," he said.
He went on to warn that "ChatGPT will not become a 'super AI hacker' that replaces hackers in the near term, but hackers who use AI will be more effective than those who do not."
When asked whether he was concerned that China or others might hack AI companies such as OpenAI, the creator of ChatGPT, he replied that he could not cite specific examples but said, "All technological advances that have been game changers in our industry have been targeted."
Regarding the point that there is no evidence the Chinese video platform TikTok is acting maliciously, he compared TikTok to a "Trojan horse" and asked, "Is there a need to stand in front of an enemy with a loaded gun?"
He also said the U.S. had overestimated Russia's conventional military capabilities and underestimated its cyber capabilities.
As an example, he explained that Russia is hacking into webcams and security cameras to monitor roads, cars, and trains in Ukraine.
