AI is introducing radical innovation even in the way we think about business, and the aim of this section is to categorize the different kinds of AI companies and business models.
In terms of business models, the AI sector can be seen as remarkably similar to the biopharma industry: expensive and lengthy R&D; long investment cycles; a low probability of enormous returns; and a concentration of funding in specific phases of development. There are, however, two differences between the two fields: the experimentation phase, which is much faster and less painful in AI, and the absence of a patenting period, which forces AI companies to evolve continuously and to rely on alternative revenue models (e.g., the freemium model).
II. The DeepMind Strategy and the Open Source Model
If we look at the incumbents' side, we notice two different nuances in the evolution of their business models. First, the growth model is changing: instead of competing with emerging startups, the biggest incumbents are pursuing an aggressive acquisition strategy.
I have named this new expansion strategy the "DeepMind strategy" because it became extremely common after Google's acquisition of DeepMind.
Companies are purchased while still at an early stage, in their first one to three years of life, when the focus is more on people and pure technological advancement than on revenues (AI is the only sector in which the pure team value exceeds the business value). They maintain elements of their original brand, and the entire existing team is retained (an "acqui-hire"). The companies keep full independence, both physically — they often stay in their original headquarters — and operationally. This independence is extensive enough to allow them to pursue acquisition strategies of their own (DeepMind bought Dark Blue Labs and Vision Factory in 2014). The parent company uses the subsidiary's services and integrates, rather than replaces, the existing business (e.g., Google Brain and DeepMind).
It seems, then, that the acquisition cost is much lower than the opportunity cost of leaving so many brains unclaimed, and it works better to (over)pay for a company today than to be cut out a few years later. In this sense, these acquisitions are pure real-option tools: they represent possible future revenues and possible future underlying layers on top of which incumbents might end up building.
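To make the real-option framing concrete, the sketch below values an acqui-hire as a simple one-period option: the acquirer pays today for the right, not the obligation, to invest later in scaling the acquired technology. All figures (outcome values, probability, strike, discount rate) are hypothetical illustrations, not data from the text.

```python
# Toy one-period binomial valuation of an acqui-hire as a real option.
# Every number below is a hypothetical illustration.

def real_option_value(v_up, v_down, p_up, strike, discount_rate):
    """Expected discounted payoff of the option to build on an acquired team.

    v_up / v_down: future value of the acquired technology in the good / bad case
    p_up: (risk-adjusted) probability of the good case
    strike: later cost of integrating and scaling the technology
    discount_rate: one-period discount rate
    """
    payoff_up = max(v_up - strike, 0.0)      # exercise only if worthwhile
    payoff_down = max(v_down - strike, 0.0)  # otherwise the option expires worthless
    expected = p_up * payoff_up + (1 - p_up) * payoff_down
    return expected / (1 + discount_rate)

# Any acquisition price below this option value is rational
# even if the target currently has zero revenue.
print(round(real_option_value(v_up=500.0, v_down=20.0, p_up=0.2,
                              strike=100.0, discount_rate=0.1), 2))  # prints 72.73
```

The asymmetry of the `max(..., 0)` payoff is the whole point: the incumbent caps its downside at the purchase price while keeping the (possibly enormous) upside, which is why overpaying today can still be cheaper than being cut out later.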
The second nuance to point out is the emergence of the open source model in the AI sector, which is quite difficult to reconcile with the traditional SaaS model. Many cutting-edge technologies and algorithms are indeed provided for free and can be easily downloaded. So why are incumbents paying huge sums, and startups working so hard, only to give everything away for free?
Well, there is a series of considerations to be made here. First, AI companies and departments are driven by scientists and academics, whose mindset encourages sharing and publicly presenting their findings. Second, open sourcing raises the bar of the current state of the art for potential competitors in the field: if it is publicly known what can be built with TensorFlow, another company that wants to overtake Google must publicly prove it can provide at least what TensorFlow allows. It also fosters use cases that the providing company never envisioned, and it establishes those tools as the underlying technology on top of which everything else should be built.
III. Implications of Open Source
Releasing free software that does not require high-end hardware is also a great way to make six things happen, as discussed below.
- Lowering the barrier to adoption, and gaining traction on products that would not otherwise have it.
- Troubleshooting, because many heads are more efficient at finding and fixing bugs, as well as at looking at things from different perspectives.
- (Crowd) validating, because the mechanisms, rationales, and implications of a new technology are often not completely clear.
- Shortening the product cycle, because from the moment a technical paper or piece of software is released, it takes only weeks for augmentations of that product to appear.
- Creating a competitive advantage in data creation/collection, in attracting talent, and in building additive products based on the underlying technology.
- More importantly, creating a data network effect, i.e., a situation in which more (final or intermediate) users create more data using the software, which in turn makes the algorithms smarter, the product better, and eventually attracts more users.
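The feedback loop in the last point can be sketched as a toy simulation. Every constant and functional form here is an illustrative assumption, chosen only to show how the loop can flip from churn-dominated decline to compounding growth once enough data accumulates.

```python
# Toy simulation of a data network effect: users -> data -> quality -> users.
# Constants and functional forms are illustrative assumptions, not real data.

def simulate(steps, users=100.0, data=0.0,
             data_per_user=1.0, quality_gain=0.01, churn=0.05):
    """Each step, users generate data; accumulated data raises product
    quality, which attracts new users while churn removes existing ones."""
    for _ in range(steps):
        data += users * data_per_user          # more users create more data
        quality = data * quality_gain          # smarter algorithms, better product
        users = users * (1 - churn) + quality  # a better product attracts users
    return users, data

u5, _ = simulate(5)    # early on, churn still dominates
u20, _ = simulate(20)  # later, accumulated data makes growth compound
print(u5 < u20)
```

With these (assumed) parameters, the user base initially shrinks because churn outweighs the still-small data stock; once accumulated data is large enough, quality-driven acquisition overtakes churn and growth reinforces itself — the dynamic that makes giving software away for free rational.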
These are the many reasons why this model is working, even though there are advocates who claim that incumbents are not really maximally open (Bostrom, 2016) and only release technology that is somewhat old to them. My personal view is that companies are getting the best out of spreading their technologies around without paying any cost or suffering any counter-effect: they still hold the unique large datasets, platforms, and huge investment capacity that allow only them to scale up.
Regardless of the real reasons behind this strategy, the effect of this business model on AI development is controversial. According to Bostrom (2016), greater openness could increase the diffusion of AI in the short term. Software and knowledge are non-rival goods, and openness would enable more people to use and build on top of previous applications and technologies at low marginal cost, and to fix bugs. There would also be strong brand implications for the companies involved.
In the long term, though, we might observe less incentive to invest in research and development because of free riding. Hence, there should exist a way to earn monopoly rents from the ideas individuals generate. On the other side, what stands in favor of openness is that open research builds absorptive capacity (i.e., it is a means of building skills and keeping up with the state of the art); it might bring extra profit from owning complementary assets whose value is increased by new technologies or ideas; and, finally, it is fostered by individuals who want to demonstrate their skills, build their reputation, and eventually increase their market value.
Notwithstanding these notes on the effect of open research on AI advancement in the short versus the long term, it is not clear where this innovation will be promoted. We are witnessing a transition from universities, where innovation and research have historically resided, to industry. This is not a new phenomenon, but it is strongly emphasized in the AI context. A vicious circle has been created in which universities lose faculty and researchers to private companies, which can offer a combination of higher salaries, more interesting problems, large and relevant unique datasets, and virtually infinite resources. This prevents universities from training the next generation of PhD students, who would be in charge of pushing the research one step ahead. The policy suggestion is therefore to fund pure research institutes (e.g., OpenAI) or even research-oriented companies (e.g., Numenta) so as not to lose the invaluable contribution that pure research has made to the field.
Most of the considerations made so far were either general or specific to big players; we have not yet focused on the business models of startups. An early-stage company has to face a variety of challenges to succeed, usually financial, commercial, or operational in nature.
The AI sector is very specific with respect to each of them. From a financial point of view, the main problem is the scarcity of specialized investors who could increase a company's value with more than mere money. The commercial issues concern instead the difficulty of identifying target customers and of getting one's head around the open source model: the products are highly novel and not always understood, and there might be more profitable ways to release them.
Finally, the operational issues are slightly more cumbersome: as mentioned above, large datasets and consistent upfront investments are essential and might be detrimental to a shorter-term monetization strategy. A solution to the data problem may be found in the "data trap" strategy, which, in venture capitalist Matt Turck's words, consists of offering (often for free) products that can initialize a data network effect. In addition, user experience and design are becoming tangibly relevant for AI, and this creates friction in early-stage companies with limited resources to allocate among engineering, business, and design.
Bostrom, N. (2016). “Strategic Implications of Openness in AI Development”. Working paper.