
Generative AI offers great opportunities, but we also need to manage the risks

June 21, 2023
Photo: AdobeStock
In the final week of March 2023, the Future of Life Institute made headlines with its open letter, signed by some of the biggest names in tech, calling on all artificial intelligence (AI) labs to “immediately pause the training of AI systems more powerful than GPT-4”.

It cited the need to allow safety research and policy to catch up with the “profound risks to society and humanity” created by the rapid advancement in AI capabilities. 

In the two months since, we’ve seen commentary from all sides about the runaway progress of the AI Arms Race and what needs to be done about it.

Sundar Pichai, CEO of Google and Alphabet, recently said that “building AI responsibly is the only race that really matters”, a mere few months after declaring a ‘code red’ in response to the success of OpenAI’s ChatGPT.

Governments are also on notice, with Members of the European Parliament having reached agreement on the EU’s flagship AI Act, and the US government investing US$140m into pursuing AI advancements that are “ethical, trustworthy, responsible and serve the public good”. 

The key question remains: how should we be thinking about balancing the dangers against the opportunities arising from the mainstreaming of (generative) AI? 

What is AI? 

AI is a system of parts – including sensors, data, algorithms and actuators – operating in many different ways and with different purposes. AI is also a sociotechnical idea: a technical tool attempting to automate certain functions, but always grounded in maths. Generative AI is just one form of AI. 
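
To make that idea of a system of parts a little more concrete, here is a minimal, purely illustrative Python sketch of the sensor-to-actuator pipeline. The function names and the thermostat-style rule are hypothetical stand-ins, not a description of any real AI system.

```python
# Purely illustrative: hypothetical stand-ins for sensors, data, an
# algorithm and an actuator working together.

def read_sensor() -> float:
    """Stand-in for a sensor producing a data point (e.g. a temperature)."""
    return 26.1

def decide(reading: float, threshold: float = 24.0) -> bool:
    """Stand-in for the algorithm: a simple rule applied to the data."""
    return reading > threshold

def actuate(cooling_on: bool) -> None:
    """Stand-in for an actuator that carries out the decision."""
    print("cooling ON" if cooling_on else "cooling OFF")

# One pass through the sense -> decide -> act pipeline.
actuate(decide(read_sensor()))
```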

The case for a new paradigm of AI risk analysis 

I recently spoke with Dr Kobi Leins, a global expert in AI, international law and governance, about how we should conceptualise this delicate balance.

Dr Leins stressed the need to deepen our risk-analysis lens and to actively consider the long-term, interconnected societal risks of AI-related harm, as well as embracing the potential benefits. She highlighted the dangers of prioritising speed over safety, and cautioned against hunting for ways to use the technologies rather than starting with the business problem and working through the toolbox of technologies available. Some tools are cheaper and less risky, and may solve the problem without the rocket-fuelled solution. 

So what does this look like? 

Known unknowns vs unknown unknowns

It’s important to remember that the world has seen this magnitude of risk before. Echoing a quote reputed to be by Mark Twain, Dr Leins told me that “history never repeats itself, but it does often rhyme.” 

History offers many examples of scientific failures that caused immense harm, where the benefits could have been kept and the risks averted. One such cautionary tale is Thomas Midgley Jnr’s invention of chlorofluorocarbons and leaded gasoline – two of history’s most destructive technological innovations. 

As Steven Johnson’s account in The New York Times highlights, Midgley’s inventions revolutionised the fields of refrigeration and automobile efficiency respectively and were lauded as some of the greatest advancements of the early twentieth century.

Over the following 50 years, however, new measurement technology revealed that they would have disastrous effects on the long-term future of our planet – namely, the hole in the ozone layer and widespread lead poisoning. Another well-known example is Einstein, who died having contributed to the creation of a tool that was used to harm so many. 

The lesson here is clear. Scientific advancements that seem like great ideas at the time and are solving very real problems can turn out to create even more damaging outcomes in the long term. We already know that generative AI creates significant carbon emissions and uses significant amounts of water, and that broader societal issues such as misinformation and disinformation are cause for concern. 

The catch is that, as was the case with chlorofluorocarbons, the long-term harms of AI, including generative AI, will very likely only be fully understood over time, and alongside other issues, such as privacy, cybersecurity, human rights compliance and risk management. 

The case for extending the depth of our lens 

While we can’t yet predict with any accuracy the future technological developments that might unearth the harms we are creating now, Dr Leins emphasised that we should still be significantly extending our time frame, and breadth of vision, for risk analysis.

She highlighted the need for a risk-framing approach focused on ‘what can go wrong’, as she discusses briefly in this episode of the AI Australia Podcast, and suggested that the safest threshold should be disproving harm. 

We discussed three areas in which directors and decision-makers in tech companies dealing with generative AI should be thinking about their approach to risk management. 

  1. Considering longer timelines and use cases affecting minoritised groups 

Dr Leins contends that risk analysis in commercial contexts is currently very siloed: decision-makers within tech companies or startups often consider risk only as it applies to their product or its designated application, or to the impact on people who look like them or hold the same knowledge and power.

Instead, companies need to remember that generative AI tools don’t operate in isolation, and consider the externalities created by such tools when used in conjunction with other systems. What will happen when the system is used for an unintended application (because this will happen), and how does the whole system fit together? How do these systems impact the already minoritised or vulnerable, even with ethical and representative data sets? 

Important work is already being done by governments and policymakers globally in this space, including the development of the ISO/IEC 42001 standard for AI management systems, designed to ensure an ongoing cycle of establishing, implementing, maintaining and continually improving AI governance after a tool has been built.
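
As a rough illustration of that kind of continual-improvement loop, here is a minimal Python sketch of a post-deployment review pass. The record structure, field names and example findings are invented for the illustration; they are not drawn from the ISO/IEC 42001 standard itself.

```python
# A hedged sketch of a post-deployment continual-improvement loop:
# log what went wrong in operation, and surface anything that has no
# mitigation yet so it drives a new or revised control next iteration.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    controls: dict[str, str] = field(default_factory=dict)  # finding -> mitigation in place
    findings: list[str] = field(default_factory=list)        # issues observed in operation

def review_cycle(record: AISystemRecord, new_findings: list[str]) -> list[str]:
    """One review pass: record new findings and return those without a control."""
    record.findings.extend(new_findings)
    return [f for f in record.findings if f not in record.controls]

chatbot = AISystemRecord(
    "support-chatbot",
    controls={"hallucinated refund policy": "human review of refund advice"},
)
gaps = review_cycle(chatbot, ["hallucinated refund policy", "biased tone with some users"])
print("needs a new or revised control:", gaps)
```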

While top-down governance will play a huge role in the way forward, the onus also sits with companies to be much better at considering and mitigating these risks themselves.

Outsourcing risk to third parties or automated systems will not be an option; it may even create further risks that businesses are not yet contemplating, beyond the familiar third-party, supply-chain and SaaS risks. 

  2. Thinking about the right solutions 

Companies should also be asking themselves what their actual goals are and what the right tools to fix that problem really look like, and then pick the option that carries the least risk. Dr Leins suggested that AI is not the solution to every problem, and therefore shouldn’t always be used as the starting point for product development. Leaders should be more discerning in considering whether it’s worth taking on the risks in the circumstances.

Start from a problem statement, look at the toolbox of technologies, and decide from there, rather than trying to assign technologies to a problem. 
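
As a toy illustration of that problem-first approach, the sketch below filters the toolbox down to the options that actually solve the stated problem and then prefers the least risky one. The candidate tools and their risk and cost scores are invented for the example, not a real assessment methodology.

```python
# Toy "start from the problem, then pick the least-risky adequate tool" sketch.
candidates = [
    # (tool, solves_the_problem, relative_risk, relative_cost)
    ("keyword rules", True, 1, 1),
    ("classical ML classifier", True, 2, 2),
    ("generative AI model", True, 4, 3),
]

# Keep only tools that address the stated problem, then prefer the lowest
# risk, using cost as the tie-breaker.
adequate = [c for c in candidates if c[1]]
tool, _, risk, cost = min(adequate, key=lambda c: (c[2], c[3]))
print(f"chosen tool: {tool} (risk={risk}, cost={cost})")
```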

There is a lot of hype at the moment, but the risks are also becoming increasingly apparent. Companies quick to adopt generative AI have already stopped using it – because it didn’t work, because it absorbed intellectual property, or because it completely fabricated content indiscernible from fact. 

  3. Cultural change within organisations 

Companies are often run by generalists, with input from specialists. Dr Leins told me that a cultural piece is currently missing: when the AI and ethics specialists ring the alarm bells, the generalists need to stop and listen. Diversity on teams and a range of perspectives are also very important, and although many aspects of AI are already governed, gaps remain. 

We can take a lesson here from the Japanese manufacturing principle of ‘andon’, where every member of the assembly line is viewed as an expert in their field and has the power to pull the ‘andon’ cord to stop the line if they spot something they perceive to be a threat to production quality.

If someone anywhere in a business identifies an issue with an AI tool or system, management should stop, listen, and take it very seriously. A culture of safety is key. 
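
To show how an ‘andon’-style stop might translate into an engineering workflow, here is a minimal, hypothetical Python sketch in which anyone can raise a concern and the next release step halts until it is reviewed. The names and structure are illustrative only, not a prescribed mechanism.

```python
# Minimal 'andon'-style stop: any team member can raise a concern, and the
# pipeline refuses to run the next step until the concerns are reviewed.

class AndonStop(Exception):
    """Raised when anyone pulls the cord."""

concerns: list[str] = []

def pull_cord(who: str, concern: str) -> None:
    concerns.append(f"{who}: {concern}")

def release_step(step: str) -> None:
    if concerns:
        # Stop the line and surface every open concern before proceeding.
        raise AndonStop(f"halted before '{step}': " + "; ".join(concerns))
    print(f"running {step}")

pull_cord("data engineer", "training data includes unconsented records")
try:
    release_step("deploy model v2")
except AndonStop as stop:
    print(stop)
```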

Closing thoughts

Founders and startups should be listening out for opportunities with AI and automation, but should also keep a healthy cynicism about some of the ‘magical solutions’ being touted. For boards, this means establishing a risk appetite that is reflected in internal frameworks, policies and risk management, but also in a culture of curiosity and humility in which concerns and risks can be flagged. 

We’re not saying it should all be doom and gloom, because there’s undoubtedly a lot to be excited about in the AI space.

However, we’re keen to see the conversation continue to evolve to ensure we don’t repeat the mistakes of the past, and that any new tools support the values of environmentally sustainable and equitable outcomes.