Deepfake attacks can easily trick facial recognition

In brief Criminals can easily steal someone else’s identity by using deepfakes to trick live facial recognition software, according to a new report.

Sensity AI, a startup focused on fighting identity fraud, carried out a series of spoofing attacks. Engineers scanned the photo from a person’s ID card and mapped that likeness onto another person’s face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the attacker was the genuine user.

So-called “liveness tests” attempt to authenticate identities in real time using images or video feeds from cameras, such as the facial recognition used to unlock mobile phones. Nine out of ten vendors failed against Sensity’s live deepfake attacks.

Sensity did not name the vendors vulnerable to these spoofing attacks. “We told them ‘listen, you’re vulnerable to this type of attack’ and they said ‘we don’t care,’” Francesco Cavalli, Sensity’s chief operating officer, told The Verge. “We decided to release it because we believe that, at a corporate level and in general, the public should be aware of these threats.”

Relying on liveness tests is risky, especially if banks or the US tax authorities, for example, use them for automated biometric authentication. These attacks are not always easy to pull off, however: Sensity noted in its report that it needed a special phone to hijack the mobile camera feed and inject its pre-made deepfakes.

PyTorch developers will soon be able to train AI models on their own Apple laptops

Apple’s newer computers contain custom chips with built-in GPUs, but PyTorch developers have not been able to harness that hardware when training machine learning models.

That will change, however, with the next release, PyTorch v1.12. “In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac,” the PyTorch team announced in a blog post this week.

“Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training.” The new release means Mac users will be able to train neural networks on their own devices without having to pay for compute rented from cloud providers.

PyTorch v1.12 is expected to land “sometime in the second half of June,” a spokesperson told The Register.

Apple’s GPUs are better optimized for training machine learning models than its CPUs, which makes it possible to train larger models, faster.
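For those who want to try it, the pattern is to check for the new Metal Performance Shaders (“mps”) backend and move models and tensors onto that device; the rest of a training loop is unchanged. The snippet below is a minimal sketch that assumes PyTorch v1.12 or later on an Apple silicon Mac, with a throwaway linear model and random data standing in for a real workload.

```python
import torch

# Use Apple's Metal Performance Shaders (MPS) backend if this build of
# PyTorch supports it and the machine has a compatible GPU; otherwise fall back to CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Placeholder model and data, purely for illustration.
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(64, 128, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# A standard training loop; nothing MPS-specific is needed beyond the device choice.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```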

Fake data for medical models

US health insurance provider Anthem is working with Google Cloud to build a synthetic data pipeline for machine learning models.

Up to two petabytes of fake data mimicking medical records and healthcare claims will be generated by the Chocolate Factory. These synthetic datasets will be used to train AI algorithms to better detect fraud, and they pose fewer security risks than collecting real data from patients.
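As a rough illustration of the idea (not Anthem’s or Google’s actual pipeline, whose details have not been disclosed), the sketch below fabricates a small table of claim-like records with an invented fraud rule and trains an off-the-shelf classifier on it. The feature names, distributions, and labelling rule are all made up for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fabricate synthetic "claims": amount billed, number of visits,
# and days since the member's previous claim. Distributions are invented.
n = 10_000
amount = rng.lognormal(mean=6.0, sigma=1.0, size=n)
visits = rng.poisson(lam=3.0, size=n)
days_since_last = rng.exponential(scale=60.0, size=n)

# Invented fraud signal: unusually large claims filed in quick succession.
fraud = ((amount > np.percentile(amount, 95)) & (days_since_last < 10)).astype(int)

X = np.column_stack([amount, visits, days_since_last])
X_train, X_test, y_train, y_test = train_test_split(X, fraud, test_size=0.2, random_state=0)

# Train a detector on synthetic records only; no real patient data is touched.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy on synthetic data:", round(clf.score(X_test, y_test), 3))
```

A model trained this way would still need to be validated against real, properly governed data before anyone trusted it in production.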

The models will eventually analyze real data and could, for example, look for fraudulent claims by automatically checking people’s health records. “Increasingly…synthetic data is going to overtake and be how people use AI in the future,” Anil Bhatt, Anthem’s chief information officer, told The Wall Street Journal.

Using fake data avoids privacy issues and could also reduce bias, but these artificial samples don’t work for every machine learning application, experts told The Register.

“Synthetic data models, we believe, will ultimately fuel the promise of what big data can deliver,” said Chris Sakalosky, general manager of US Healthcare &amp; Life Sciences at Google Cloud. “We think that’s actually what will drive this industry forward.”

Ex-Apple AI director leaves for DeepMind

A former director of machine learning at Apple, who reportedly quit due to the company’s back-to-work policy, is moving to work at DeepMind.

Ian Goodfellow led the iGiant’s secret “Special Projects Group”, helping to develop its self-driving car software. He reportedly left after Apple asked employees to return to the office three days a week from May 23, a policy that has since been delayed due to a rise in COVID-19 cases.

He is now joining DeepMind, according to Bloomberg. Interestingly, Goodfellow will reportedly be employed as an “individual contributor” at the UK-based research lab. He is best known for inventing generative adversarial networks, a type of neural network often used to produce AI-generated images, and for co-authoring the popular Deep Learning textbook published in 2016.
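For context, a generative adversarial network pits two networks against each other: a generator tries to produce convincing fakes while a discriminator tries to tell them apart from real samples. The toy sketch below, written in PyTorch with invented hyperparameters, shows that adversarial loop on a simple one-dimensional distribution rather than images; it is an illustration of the technique, not code from Goodfellow’s original paper.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples drawn from N(4, 1.5).
# Architecture and hyperparameters are invented for this example.
latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0          # samples from the target distribution
    fake = generator(torch.randn(64, latent_dim))  # generator's attempt to imitate them

    # Discriminator step: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", generator(torch.randn(1000, latent_dim)).mean().item())
```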

Goodfellow was a director at Apple for more than three years and previously held artificial intelligence research roles at Google and OpenAI. ®
