The possibilities of artificial intelligence are endless, but what’s the point if the social impact isn’t positive?
In northern Canada, translator apps are helping researchers preserve a threatened Inuit language and connecting the remote communities that still speak it.
In London, developers are working to make object recognition more personal for blind and low-vision individuals, a critical step toward involving the technology's users in collecting the data that powers it, and improving their access to the world.
At the Metropolitan Museum of Art in New York, cognitive search functions are being used to tag and classify artworks in more detail than ever before in order to make the collection accessible, in a meaningful way, to people who may never set foot inside.
Scientists at the CSIRO in Australia are reducing plastic waste flowing into the ocean by using object recognition on river bridges and sensors in stormwater drains to identify, quantify and remove rubbish before it reaches the sea.
The common denominator in these initiatives is that they are powered by AI and supported by Microsoft's Azure cloud technology, with funding also provided by Microsoft.
Innovation in the startup space is fuelled by a desire to solve problems and challenges; AI is often the pathway to a solution, but it comes with considerations.
Lee Hickin is National Chief Technology Officer (CTO) at Microsoft Australia and country lead for Microsoft’s Office of Responsible AI. As such he spends a lot of time focussing on the ethical implications of AI. He likens his position to standing on the edge of a precipice: incredibly exhilarating and terrifying at the same time.
When people express concern about AI’s potential negative consequences, Hickin is quick to point out that AI is simply a tool.
“It’s about how you use it. The same technology can do both good and bad things,” Hickin tells Startup Daily.
Startups are increasingly looking to apply a lens of social responsibility over how they use AI within their products and services, and Hickin has plenty of valuable advice.
Is AI the right solution?
One of the first questions tech companies should ask themselves, he says, is whether they actually need to use AI to solve the problem. Rather than assuming that AI will solve everything, Hickin advocates diving deeper into the problem that your tool or your tech is seeking to address.
Focus on the problem
He recommends that startups focus on interrogating the problem they’re trying to solve rather than just the technical solution they are building.
“I find that most people with brilliant ideas get very focused on the solution to the problem, the idea they have,” he says. “They don’t always see the bigger picture, and the importance of having conversations with people who are impacted by your tool, people that might want to use it, people that you need to engage to make your tool work.”
Inclusivity is key
Hickin stresses that the most important thing for tech companies is to focus on inclusivity: "At Microsoft, what we have found as we've tried to meld together our responsible ethics approach to AI, and our inclusive approach to technology (that meets everyone's needs), is that if you build and design for the most challenged user of your technology, it will lead you down a path of actually building the most responsible and ethical solution."
Build a culture of questioning
As startups build their teams and their business, Hickin also advocates building a culture that is really open to being questioned.
“If you’re not in a position where your team is able to answer difficult questions about impact, about transparency, about security, about privacy, about inclusion, about diversity, you’re always going to be hamstrung on being positive in your output.”
Invest in your ethics
Most startups build a great product, says Hickin, but they don't always think about the responsibility that product carries for its purpose. "Invest as much in your ethics and your responsibility mindset – your vision, if you like – as you do in your technology."
Lee Hickin’s four strategies for AI for social impact
Hickin says there are four significant strategies that startup founders and entrepreneurs should adopt if they are serious about implementing AI for positive social impact:
1. Set your standard and stick to it
Develop a clear point of view on your ethics and how you’re going to uphold them. Don’t just say it. Do it.
2. Communicate why your innovation is important
Talk often, clearly, and with purpose about your ethics as much as you talk about your innovation. Don't just tell the market what an amazing product you've got; tell them why it's important that product exists, and how it's going to impact society. Be very clear on that.
3. Be open to change
Solicit as much external input as you can, and be open to advice that might even change the direction of your product. Never assume you have it right.
4. Responsible AI is never done
Hickin says that the most important thing founders can do is to consider responsible AI as an ongoing process of governance. Never assume it’s done. Responsible AI is not a checkbox on the road to release. It’s your licence to operate. Treat it as such.
At the core of what Microsoft does is bringing human talent and ideas together with innovative cloud technologies to make a difference.
Technology doesn't change the world, people do. But as the innovators preserving the cultural heritage of a disappearing Inuit language, or saving the oceans one plastic bag at a time, have shown, AI can help.
Are you inventing with purpose? Power your startup with Microsoft’s secure, future-ready Azure cloud solutions.
This article is brought to you by Startup Daily in partnership with Microsoft.
Feature image: AdobeStock