I agree with you on your first question, that policymakers should actually define the guardrails, but I don’t think they have to do that for everything. I think we have to choose the areas that are the most sensitive. The EU has labeled them as high risk. And maybe we could take some models from that to help us think about what is high risk and where we should spend more time, and maybe, for policymakers, where should we spend time together?
I’m a big fan of regulatory sandboxes when it comes to feedback, co-design, and co-evolution. Uh, I have an article in an Oxford University Press book about an incentive-based rating system that I could talk about in a moment. But on the other hand, I also think that you all have to reckon with your reputational risk.
As we move into a much more digitally advanced society, developers must also do their due diligence. You cannot afford as a company to publish an algorithm that you think is the best idea, or an autonomous system that you think is the best idea, and then end up on the front page of the newspaper. Because that affects the trustworthiness of your product in the eyes of your consumers.
And what I’m saying, you know, to both sides, is that I think it’s worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don’t have the technical rigor when it applies to all demographics. Or when it comes to the disparate impacts of financial products and services. There are great models that I have found in my work in the banking industry, where they actually have triggers because they have regulators who help them understand which proxies actually produce disparate effects. There are areas where we’ve just seen this in the housing and appraisal market, where AI is being used to replace subjective decision-making but contributes more to the type of discrimination and predatory appraisals that we’re seeing. There are specific cases where we actually need policymakers to impose guardrails, but more importantly, to be proactive. I tell policymakers all the time, you can’t blame data scientists when the data are terrible.
Anthony Green: Right.
Nicole Turner-Lee: Put more money into research and development. Help us create better data sets that are not over-represented in certain areas or under-represented when it comes to minorities. The main thing is, it has to work together. I don’t think we’re going to have a good solution if policymakers actually lead this, or data scientists lead it by themselves in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don’t. We know what we’re doing with these models when we’re building algorithms or autonomous systems or ad targeting. We know! We in this room cannot sit back and say we don’t understand why we use these technologies. We know, because they actually have a precedent for how they’ve been spread in our society. But we need some accountability. And that’s really what I’m getting at. Who holds us accountable for these systems that we’re creating?
It’s so interesting, Anthony, that many of us have been watching the, uh, conflict in Ukraine for the last few, uh, weeks. My daughter, who is 15 years old, came to me with a variety of TikToks and other things that she’s seen to say, “Hey mom, did you know this is happening?” And I sort of had to pull myself back, because before I knew it I had gotten really deep into the conversation once I went down that path with her. Going deeper and deeper and deeper into that well.
Anthony Green: Yes.