Cyber executive fraud scams have been rampant for years. These scams trick an employee into transferring large sums of money into the fraudster's bank account. In the past, they often involved using a high-level executive's hacked email account (or an email appearing to be from the executive) to instruct the employee to quickly and secretly transfer money for a "special project" that no one else should know about. These scams play on an employee's desire to please the requesting executive and on the employee's unique position to act quickly. The average loss used to be around US$100,000. But the scams have been growing steadily more sophisticated and costly, often involving a detailed inspection of the executive's email to find information that makes the request sound more believable (such as identifying current projects, confirming when the executive is likely to be unavailable for a call, and even crafting the email to sound more like the executive).
Recently, the risk grew far greater when it was reported that a deepfake videocall, featuring AI-generated likenesses of a multinational company's CFO and other co-workers, was used to convince a Hong Kong branch employee to make 15 transfers totaling HK$200 million (approximately US$25 million) into five local Hong Kong bank accounts. Reports indicate that the initial email request seemed suspicious to the employee, but she was then invited to a videochat, purportedly over a common personal communications app, where the deepfake of the CFO, and apparently of other employees as well, was used to instruct her to make the transfers. The deepfakes were reportedly AI-generated videos created from past videochat recordings of the individuals. According to the reports, the deepfakes functioned more like recordings: they could not interact with or respond to questions, and may have shown somewhat limited head movement. It appears that at least one of the hackers was a live participant orchestrating the call, so that, after allowing the Hong Kong employee to introduce herself, the deepfake images directed her to make the transfers. Only after completing the 15 transfers did the employee contact the company's UK headquarters, where she was informed that no such instruction had been given.
Gen-AI also appears to be involved in other incidents in which deepfake images of individuals are used to contact their loved ones to request "urgent funds," including claims that the individuals have been kidnapped or are otherwise in dire need. Further, we are increasingly seeing deepfake images of celebrities and even public officials. Moreover, it appears that hackers are using AI to sift through large volumes of digital data to identify more convincing approaches for their scams, as well as weaknesses in software code and network security.