Last week, I explained why, in my first few weeks working in deep-tech VC, I decided to scan the Australian AI ecosystem and talk to more than 50 founders building inspiring and impactful companies in the AI space.
If you have not done so, have a read here. In today’s instalment, I will run through some of the key lessons I learned about AI start-ups throughout this process and provide some insight into what I am now looking for.
#1 — Few companies have secured the data they need to win
Perhaps the most common pattern when meeting very early-stage founders was that they had brilliant ideas (think big impact, big market, big potential) but did not have the data necessary to begin building. The reasons varied: they were working in relatively unexplored or niche areas where publicly available data would be of no use; the desired customer was the one holding the data; the data existed but was highly confidential; or a hardware device needed to be built to collect the data.
Many of the AI companies I met were vertical AI companies attempting to use AI to solve a very specific problem in highly optimised ways. For those companies, not only is proprietary data a massive value-add, but the data-access problem is usually amplified since they are working in very specific applications/industries.
Although I still view data access as the biggest obstacle for early-stage AI founders, I learned that there are many effective approaches to dealing with this stand-still until you get the data kick-start you need.
Approaches ranged from building a base platform that still offers value to customers and using it as a data-collection mechanism, to crowdsourcing data collection, to starting with dummy datasets (which you can generate with AI), to finding a first-believer customer who is ready to give you their data, to manually collecting the data yourself. There were also founders with no plan for how they would go about collecting the necessary data. Here, the rule of thumb is that no plan is a bad plan.
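The dummy-dataset route can be as simple as scripting fake records that mimic the schema you expect real customer data to have, so your pipeline and models exist before the real data does. A minimal sketch, with an entirely hypothetical sensor-reading schema:

```python
# Generate synthetic records with a plausible, made-up schema so the
# rest of the stack (ingestion, training, dashboards) can be built and
# tested before any real customer data arrives.
import random

def generate_dummy_records(n, seed=42):
    """Generate n fake sensor readings; the seed keeps runs reproducible."""
    rng = random.Random(seed)
    records = []
    for i in range(n):
        records.append({
            "id": i,
            "temperature_c": round(rng.gauss(22.0, 3.0), 2),   # plausible room temps
            "vibration_mm_s": round(rng.uniform(0.0, 5.0), 2),
            "fault": rng.random() < 0.1,                        # ~10% positive class
        })
    return records

records = generate_dummy_records(100)
```

The point is not realism; it is that every downstream component gets exercised early, and the fake generator is swapped out the day real data lands.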
#2 — Many companies say they use ‘AI’ but few have done so meaningfully
If you monitor the start-up world in any way, you know there is a fair share of companies whose names end in ‘.ai’. Given the plethora of tools available to developers, you can easily run some machine learning algorithms on your datasets, generate some ‘insights’ and then slap terms like ‘AI-engine’ on your company description.
Unsurprisingly, there is often debate about what counts as sufficient use of AI: when is a start-up ‘AI enough’? Generally, if you have a good product that is doing good things, this is a relatively moot discussion. Go ahead and buy that ‘.ai’ domain!
It becomes slightly more important when working in deep-tech investment and/or when intellectual property is of significance. My scan through the AI ecosystem has given me a rough (working) guide on whether AI sits at the heart of a company’s tech-stack/innovation. For some AI companies, their innovation is largely centred around the models they develop. For others, the innovation lies in building a proprietary dataset. Often, it’s both.
If, however, a company uses pre-existing, non-complex models (think linear regression) and has not collected a proprietary dataset (this, of course, is still a valid way to build a product!), it may not clear the deep-techiness/AI-ness bar.
#3 — Product and customer are more important than ever
In my first few months in VC, my inner nerd was so excited to see AI in action that it was quite easy to forget the very basic principles of start-ups. So, as simple as this sounds, another lesson I learned was that regardless of whether you have developed the world’s coolest generative AI model, the basic conditions of a good start-up matter most.
For example, you need customers who will want and love your product. Your AI-driven tech-stack also needs to sit underneath a product that is useful and sticky (especially for the intended user — e.g. do not make the product highly technical when the intention is to develop a product for non-AI-experts).
Always ask ‘what features will really address my customers’ pain-points?’ as opposed to ‘what features do I really want to build?’. For AI start-ups, it is also very important to consider how you will develop a robust data-ingestion pipeline before you build an aesthetic GUI/dashboard.
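What a robust ingestion layer looks like will vary by company; one common pattern is validating and normalising records at the boundary and quarantining bad rows rather than silently dropping them. A hedged sketch, using a made-up two-field schema:

```python
# Validate incoming rows before anything downstream (models, dashboards)
# sees them; bad rows are quarantined with a reason, not discarded.
def ingest(raw_rows):
    """Split raw rows into clean, normalised records and quarantined rejects."""
    clean, quarantined = [], []
    for row in raw_rows:
        try:
            record = {
                "customer_id": str(row["customer_id"]).strip(),
                "amount": float(row["amount"]),
            }
            if record["amount"] < 0:
                raise ValueError("negative amount")
            clean.append(record)
        except (KeyError, ValueError, TypeError) as err:
            quarantined.append({"row": row, "reason": str(err)})
    return clean, quarantined

good, bad = ingest([
    {"customer_id": " 42 ", "amount": "19.99"},
    {"customer_id": "43"},                      # missing amount: quarantined
    {"customer_id": "44", "amount": "-5"},      # negative amount: quarantined
])
```

The quarantine list doubles as a cheap data-quality report, which is usually more valuable to an early-stage team than another dashboard widget.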
Thus, for an early-stage start-up, caring about your pipeline, product and customer, in addition to more technical goals like building the best model, is an early sign of success.
#4 — Bigger picture questions about AI matter
Well before looking at AI through a deep-tech investment lens, I have always been intrigued by the future of AI, largely because there is perhaps no other enabling ‘technology’ that has sparked so much varied and polarised philosophical and academic discussion about both risks (think algorithmic bias, data-privacy issues) and opportunities (improving quality of care in sectors like healthcare, automating repetitive tasks to enable people to focus on more fulfilling parts of their lives and roles).
Understanding this bigger picture is an important tool, even for a task like trying to find the next big AI company, because a founder with a knack for success should have the foresight, nuanced understanding of the space and long-term vision to proactively work out where their company sits with respect to these questions.
Are you contemplating data privacy and de-identification?
Are you considering whether your product is trained on enough data points to avoid making harmfully inaccurate and biased predictions?
Are you treating your AI models like a closed loop or integrating feedback from user interactions?
Are you contemplating the consequences of poor explainability for deep-learning models and what that means for user trust?
These are hard questions to ask, especially as a busy founder who is nailing product-market fit, pitching to investors, speaking to customers and, well, *insert 1000 other tasks*. But building AI responsibly is worthwhile.
ChatGPT’s insane success, for example, is largely attributed to OpenAI’s realisation that they needed human feedback (when fine-tuning their GPT-3 model with reinforcement learning) to make a chatbot that could actually be useful and more human-like; this required an understanding of higher-level questions about how humans interact with their model.
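The closed-loop question above can be made concrete even without a full RLHF setup: simply logging user reactions to each prediction gives you a queue of examples worth relabelling. A minimal sketch (the class and method names are illustrative, not any particular library’s API):

```python
# Log user feedback on each prediction so it can feed later retraining,
# instead of treating the deployed model as a closed loop.
class FeedbackStore:
    def __init__(self):
        self.records = []

    def log(self, model_input, prediction, user_rating):
        """user_rating: e.g. +1 (helpful) or -1 (unhelpful)."""
        self.records.append(
            {"input": model_input, "prediction": prediction, "rating": user_rating}
        )

    def retraining_batch(self):
        """Return only negatively rated examples: the ones worth relabelling."""
        return [r for r in self.records if r["rating"] < 0]

store = FeedbackStore()
store.log("what is AI?", "AI is ...", +1)
store.log("summarise this", "(off-topic answer)", -1)
batch = store.retraining_batch()
```

Even this crude version forces a team to decide what feedback to collect and how it flows back into the model, which is exactly the design question the closed-loop framing raises.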
#5 — Diversity matters and is lacking for both founders and AI talent
If you read my previous article, you would know that one of my intentions when searching the AI landscape was to ensure that I could also meet with diverse and more incognito/’low-key’ founders. It was a successful search — I met so many amazing and diverse founders working on incredible ideas.
However, I noticed that I could not find as many diverse female-identifying AI founders as I would have liked.
I then embarked on a journey of figuring out why that was the case and met up with a researcher from UMelb who was exploring the somewhat unrelated topic of attrition rates of diverse people in STEM. She pointed out that attrition actually begins when students take part in work experience at university, because they realise that the culture in STEM-related workplaces can often feel neither inclusive nor welcoming.
This led me to two key reflections about the state of diversity in the AI space.
Firstly, just as the STEM workplace can be an unwelcoming bubble for diverse people, the broader AI start-up world can be too.
This can turn diverse people away from the founder journey, even though they are sitting on fantastic ideas and have the potential to be awesome founders. The notion that the start-up world can feel like a non-inclusive bubble is perhaps borne out by how investors tend to favour women working on ‘nice’/social-impact ideas (as this fits within the ‘nurturing’ mother stereotype) (source), while otherwise hitting women with hard-hitting ‘prevention-oriented’ questions (source).
The good news is that there are a number of awesome organisations like Women in AI and investors who are becoming aware of such biases and being attentive to the questions they ask founders, helping pave the way for a new generation of AI start-ups run by brilliant and diverse people.
Secondly, it is pivotal that AI founders actively care about diversity in their hiring practices and when building their workplace culture. Hiring diverse technical talent (not just an ‘Office Mum’) and building an inclusive culture will not only help reduce the discussed attrition, allowing more diverse talent to trickle through, but is also pivotal for building a good AI product.
#6 — Vertical applications of AI have dominated the space but horizontal AI is making a comeback
A large percentage of AI companies focus on specific vertical applications of AI. This makes sense when considering that, next to data processing, defining the problem scope (e.g. what are my inputs and outputs; which AI models make the most sense for the problem I am trying to solve) is a very arduous (and technical!) process when building an AI tool.
However, with more and more businesses wishing to automate their specific processes by leveraging the powers of AI, horizontal do-it-yourself AI platforms might just be cool again.
Most horizontal AI platforms have been built to help data scientists or AI developers do their role more efficiently.
But what about businesses that do not have the budget for a data scientist and have no idea how to process their data, which model to choose, or what sort of inferences they can make with their data?
I am now looking for a startup that can address this gap.