The generative AI boom continues, with startups building business models around the technology beginning to define themselves along two clear lines.
Some, convinced that a proprietary, closed approach will give them an advantage over swarms of competitors, choose to keep their AI models and infrastructure in-house, hidden from public view. Others open source their models, methods, and datasets, embarking on a more community-led growth path.
Is there a right choice? Maybe not. But every investor seems to have an opinion.
Dave Munichiello, general partner at GV, an investment arm of Alphabet, argues that open source AI innovation can foster trust among customers through transparency. Closed models, while potentially more efficient given teams’ lighter documentation and publishing workload, are inherently less explainable and therefore harder to sell to “boards and executives,” he says.
Ganesh Bell, managing director of Insight Partners, broadly agrees with Munichiello’s view. But he says open source projects are often less mature than their cloud-based counterparts, with front-ends that are “less consistent” and “more difficult to maintain and integrate.”
Depending on who you ask, the choice of development direction (closed source or open source) matters less to startups than the overall go-to-market strategy, at least in the early stages.
Christian Noske, partner at NGP Capital, says startups should focus more on applying the results of their models, open source or not, to “business logic” and, ultimately, proving a return on investment for their customers.
But many clients don’t care about the underlying model or whether it’s open source, says Ian Lane, partner at Cambridge Innovation Capital. They’re looking for ways to solve a business problem, and startups that recognize this will have a head start in the crowded AI field.
Now, what about regulation? Could it affect how startups grow and scale their businesses, and even how they release their models and supporting tools? Maybe.
Noske believes that regulation could increase the costs of the product development cycle, strengthening the position of large technology companies and incumbents at the expense of smaller AI providers. But he says more regulation is needed, particularly policies that emphasize the “clear” and “responsible” use of data in AI and that address labor market impacts and the many ways AI can be weaponized.
Bell, on the other hand, sees regulation as a potentially lucrative market. Companies that create tools and frameworks to help AI providers comply with regulations could score a windfall — and, in doing so, “help build trust in AI technologies,” he says.
Open source versus closed source, business models and regulation are just a few of the topics covered here. Interviewees also discussed the pros and cons of transitioning from an open source to a closed source company, the possible security benefits and dangers of open source development, and the risks of relying on API-based AI models.
Read on to hear from:
Dave Munichiello, general partner, GV
Christian Noske, partner, NGP Capital
Ganesh Bell, managing director, Insight Partners
Ian Lane, partner, Cambridge Innovation Capital
Ting-Ting Liu, investor, Prosus Ventures
Answers have been edited for length and clarity.
Dave Munichiello, General Partner, GV
What are the main advantages of open source AI models over their closed competitors? Do the same tradeoffs apply to UI elements like AI front-ends?
Public innovation (via open source) creates a dynamic in which developers feel that the models they deploy have been deeply evaluated by others and scrutinized by the community, and that the organizations behind them are willing to tie their reputations to the quality of the model.
R&D from universities and businesses has been the source of AI innovation over the past several decades. The open source community and open source products strive to engage this essential part of the ecosystem, whose incentives differ from those of profit-seeking businesses.
Closed models can be more efficient (perhaps a technical lead of 12 to 18 months?) but will be less explainable. Boards and executives will trust them less unless they are strongly backed by a brand-name technology company willing to put its brand on the line to certify quality.
Is open source potentially dangerous depending on the type of AI in question? The ways in which Stable Diffusion has been abused come to mind.
Yes, anything can be dangerous if used or deployed in an unsafe manner. Long-tail open source models may, in a rush to market, face less scrutiny than closed-source competitors, whose bar for quality and security must be set higher. As such, I would distinguish widely used, popular open source models from long-tail open source models.