My chilling run-in with secretive facial-recognition app Clearview AI


Then one day I logged in to Facebook to discover a message from a ‘friend’ named Keith. I didn’t remember him, but he mentioned that we had met a decade earlier at a gala for Italian Americans. Back then, I’d been more cavalier about my privacy and said yes to just about any ‘friend request’. 

‘I understand you are looking to connect with Clearview,’ Keith wrote. ‘I know the company, they are great. How can I be of help?’ Keith worked at a real estate company in New York and had no obvious ties to the facial recognition start-up. I had many questions – foremost among them being how he knew I was looking into the company and whether he knew the identity of the technological mastermind who had supposedly built the futuristic app – so I asked him for his phone number. He didn’t respond. I asked again two days later. Silence. 

As it became clear that the company wasn’t going to talk to me either, I tried a new approach: find out whether the tool was as effective as advertised. I recruited a detective based in Texas who was willing to help look into Clearview, as long as I didn’t reveal his name. He went to Clearview’s website and requested access. Unlike me, he got a response within half an hour with instructions on how to create an account for his free trial. All he needed was a police department email address. 

He ran a few photos of criminal suspects whose identities he already knew, and Clearview nailed them all, linking to photos of the correct people on the web. Then he ran his own image through the app. He had purposely kept photos of himself off the internet for years, so he was shocked when he got a hit: a photo of him in uniform, his face tiny and out of focus. It had been cropped from a larger photo, and a link took him to the original on Twitter (now X). A year earlier, someone had tweeted a photo from a Pride festival. The Texas investigator had been on patrol at the event and appeared in the background of someone else’s photo. When he zoomed in, his name badge was legible. 

He was stunned that a face-searching algorithm this powerful existed. If the technology became publicly available, it had potentially horrible implications for undercover officers, for example. I told him that I hadn’t been able to get a demo yet and that another officer had run my photo but had got no results. He ran my photo again and confirmed that there were no matches. 

Minutes later, his phone rang. It was a number he didn’t recognise, with a Virginia area code. He picked up. ‘Hello. This is Marko with Clearview AI tech support,’ said the voice on the other end of the call. ‘We have some questions. Why are you uploading a New York Times reporter’s photo?’ 

‘I did?’ my associate responded cagily.

‘Yes, you ran this Kashmir Hill lady from The New York Times,’ said Marko. ‘Do you know her?’

‘I’m in Texas,’ the police officer replied. ‘How would I know her?’ 

The company representative said it was ‘a policy violation’ to run photos of reporters in the app and deactivated the account. The detective helping me was taken aback, creeped out that his use of the app was being that closely monitored. 

He called immediately to tell me what had happened. A chill ran through me. It was a shocking demonstration of just how much power this mysterious company wielded. The people who control a technology that becomes widely used hold great power over our society. But who were the people behind Clearview?

Hoan Ton-That is a Vietnamese-Australian, in his mid-30s, 6ft 1in tall, with long, silky black hair and androgynous good looks. He dresses in paisley-print shirts and suits in a rainbow of colours made bespoke in Vietnam, where, his father told him, his ancestors had once been royalty.
