Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of breakthroughs in artificial intelligence and computing, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped develop the first commercially viable facial recognition system.
Credits:
This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens with the help of Lindsay Muscato. Edited by Michael Reilly and Mat Honan. Mixed by Garret Lang, with sound design and music by Jacob Gorski.
Complete transcript:
[TR ID]
Jennifer: I’m Jennifer Strong, host of In Machines We Trust.
I want to tell you about something that we worked on behind the scenes for a while.
It’s called I Was There When.
It’s an oral history project that tells the stories of breakthroughs in artificial intelligence and computing … as told by the people who witnessed them.
Joseph Atick: And when I walked into the room, it spotted my face, pulled it out of the background and said, “I see Joseph,” and that was the moment where the hair on the back of my neck … I had the feeling that something was happening. We were witnesses.
Jennifer: We start things with a man who helped create the first commercially viable facial recognition system … back in the 90s …
[IMWT ID]
I am Joseph Atick. Today I am executive chairman of ID4Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I haven’t always been in the humanitarian field. After completing my doctorate in mathematics, my colleagues and I made some fundamental breakthroughs that led to the first commercially viable facial recognition. That’s why I’m known as the founding father of facial recognition and the biometric industry. The algorithm for how a human brain would recognize familiar faces came to me while we were doing research, mathematical research, when I was at the Institute for Advanced Study in Princeton. But we were far from having any idea of how you would implement something like this.
It was a long period of months of programming and failure and programming and failure. And one night, early in the morning, we had just finished a version of the algorithm. We submitted the source code for compilation in order to get executable code. And we stepped out, I stepped out to go to the washroom. And then when I stepped back into the room, the source code had been compiled by the machine and returned. And usually after you compile it, it runs automatically, and when I walked into the room, it detected a human walking into the room, and it found my face, extracted it from the background, and said, “I see Joseph,” and that was the moment where the hair on the back of my neck … I felt like something had happened. We were witnesses. And I started calling the other people who were still in the lab, and every one of them came into the room.
And it would say, “I see Norman. I see Paul. I see Joseph.” And we took turns walking into the room, just to see how many of us it could spot. It was a moment of truth, where I would say several years of work finally led to a breakthrough, even though theoretically no additional breakthrough was required. Just the fact that we figured out how to implement it, and finally saw that capability in action, was very, very rewarding and satisfying. We had developed a team that was more of a development team than a research team, focused on putting all of those capabilities into a PC platform. And that was the birth, really the birth of commercial facial recognition, I would say, in 1994.
My concern started very quickly. I saw a future where there was no place to hide, with the proliferation of cameras everywhere, the commoditization of computers, and the ever-improving processing power of computers. So in 1998, I lobbied the industry and said we had to put together principles for responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had put in place a responsible-use code to be followed by whatever the implementation was. However, this code has not stood the test of time. And that’s because we did not foresee the advent of social media. When we introduced the code in 1998, we basically said the most important element of a facial recognition system was the tagged database of known people. We said, if I’m not in the database, the system will be blind.
And it was difficult to build the database. We could build, at most, 1,000, 10,000, 15,000, 20,000, because every image had to be scanned and entered by hand. In the world we live in today, we are now in a regime where we have let the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. We are now in a world where it is very difficult to control the use of facial recognition and to hold everyone accountable. And at the same time, there is no shortage of known faces on the internet, because you can just scrape them, as has recently been the case with some companies. And so I began to panic in 2011, and I wrote an op-ed saying it was time to hit the panic button, because the world is heading in a direction where facial recognition will be ubiquitous and faces will be available everywhere in databases.
And back then people said I was an alarmist, but today they realize that this is exactly what is happening. So where do we go from here? I’ve been lobbying for legislation. I campaigned for a legal framework that makes you liable if you use someone’s face without their consent. So it’s no longer a technological issue. We cannot contain this powerful technology through technology alone. There has to be a legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we consider acceptable.
The issue of consent remains one of the most difficult and challenging topics when it comes to technology. Notice alone is not enough. I have to give my consent. I need to understand the consequences of what it means. It’s not enough to say, well, we posted a notice and that was enough. We told people, and if they didn’t want to, they could have gone elsewhere.
And I also find that it’s so easy to be seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we realize that we’ve given up something that was too precious. And by that point, we have desensitized the population, and we have reached a point where we cannot pull back. That’s what worries me. I worry about the fact of facial recognition, through the work of Facebook and Apple and others … I’m not saying it’s all illegitimate. A lot of it is legitimate.
We have reached a point where the general public may have become blasé and desensitized because they see it everywhere. And maybe in 20 years, you will step out of your house and no longer have the expectation that you won’t be recognized by dozens of people you cross along the way. I think at that point the public will be very alarmed, because the media will start reporting on cases where people were stalked. People were targeted, people were even selected based on their net worth on the street and kidnapped. I think that’s a lot of responsibility on our hands.
So I think the issue of consent will continue to preoccupy the industry. And until this question is addressed, it may not be resolved. I think we need to establish limits on what can be done with this technology.
My career has also taught me that being too far ahead is not a good thing, because facial recognition, as we know it today, was invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now spread all over the world. Basically, at some point, I had to step down as CEO of a public company because I was limiting the use of a technology that my company wanted to promote, out of fear of the negative consequences for humanity. So I believe scientists need to have the courage to project into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, they should go full steam ahead and make more breakthroughs, but we should also be honest with ourselves and alert the world and policymakers that this breakthrough has pluses and minuses. And so, in using this technology, we need some sort of guidance and framework to make sure it is channeled toward positive rather than negative applications.
Jennifer: I Was There When … is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.
Do you have a story to tell? Do you know someone who does? Email us at podcasts@technologyreview.com.
[MIDROLL]
[CREDITS]
Jennifer: This episode was recorded in New York City in December 2020 and produced by me with help from Anthony Green and Emma Cillekens. We are edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang … with sound design and music by Jacob Gorski.
Thanks for listening, I’m Jennifer Strong.
[TR ID]