The European Union as a catalyst for European technology development

The contribution of the U.S. and China to emerging technologies in Europe is essential, whether through the infrastructure, products and solutions they deliver, or the financing they provide. While there is no question of European Union countries doing without them in the near future, this growing external influence – which shows no signs of fading in some areas (search engines, online marketplaces, social networks, cloud infrastructure…) – is likely to result in a structural dependency that will be difficult to unwind.

Taking better control of tech development in Europe is not only an opportunity for the European Union (E.U.) to foster economic growth; it is also a matter of sovereignty for all its member states. By seeking to better protect European values and to establish more balanced partnerships with foreign superpowers, the E.U. will strengthen its influence over both European policies and international relations.

In early Feb.22, as part of the “Scale-up Europe” (1) conference and the French Presidency of the European Union, some bold decisions were taken in the fields of (i) venture capital financing, (ii) talent mobility and (iii) deep tech, with a view to facilitating the emergence of at least ten big tech European champions valued at more than €100bn by 2030.   

Although the European Union can sometimes be portrayed as an impediment to economic and tech development, especially as the effects of larger internal markets in the U.S. and China are not as impactful within the scope of the single European market, we will show that (and how) the E.U. can become a powerful catalyst for European tech by (i) combating “fragmentation”, (ii) building trust and restoring competition and (iii) expanding financing.

I. Combating fragmentation

One of the reasons put forward to explain the significant gap between European technology “champions” and their competitors relates to the fragmentation of the European market, reflected in regulatory discrepancies, differences in business and consumption cultures, and hindrances to talent mobility. In these areas, the U.S. and China have a clear comparative advantage in terms of scaling potential when fostering the emergence of international leaders such as the GAFAM or BATX (2). Obviously, a large internal market is not the only factor that accounts for this success.

Developing a genuine single market has actually been at the heart of the European project for decades. Still, it is key for the E.U. to continue in that direction, and technology is another opportunity to accelerate harmonisation.

Illustration 1: Allocation of Gross Domestic Product (GDP) between the E.U., U.S. and China (2020 figures stem from the International Monetary Fund).

A. Regulation harmonization

Though regulatory convergence between most European countries has been underway since the 1950s, there appears to be much room for further improvement. As expansion into other European countries becomes a more sought-after objective for start-ups, differing rules among member states are considered the main regulatory hurdle within Europe, according to a survey conducted by the VC fund Atomico in its “State of European Tech 2021” report. These differences are reflected in many fields, such as tax and social rules, but also in the way business is done in some sectors. The complexity they introduce delays the expansion process and leads to additional costs, thus limiting economies of scale. In many cases, it has proved easier and more profitable for a European-based start-up to expand internationally, say into a larger market such as the U.S., than into some other E.U. countries. When they are not suffering from divergent rules, E.U. start-ups are burdened with “over-regulation”, resulting in the same sort of hindrances.

The E.U. and its member states should pursue convergence and simplification, with a view to fostering further innovation and facilitating the emergence of European “big tech” companies. However, this convergence should not be pursued unreservedly, as it could upset the balance with some delicate national and local objectives. In that regard, the complexity of the E.U.’s work rightly lies in identifying the actionable levers that are essential to creating a more favorable environment for European tech companies to thrive, both in the short and longer term. It is unlikely that the E.U. will create a unified, efficient single market comparable to the U.S. in the coming years, if only as far as tech is concerned. Still, the E.U. must keep contributing to the convergence of best practices that can be identified through the diversity of its member states, and use the motive of tech sovereignty to further strengthen ties between member states.

B. Cultural diversity

With its 27 countries imbued with diverse cultures, the E.U. is fertile ground for diversity. However, the contribution of this diversity to economic prosperity is questionable and could give rise to numerous debates. In this article, our objective is simply to underscore a few patterns relevant to our discussion and examine their potential underlying impacts. With respect to technology development in Europe, diversity mainly stems from business/work behaviours, consumption habits or languages. We do not consider business and consumption practices to be a significant hurdle to a more integrated market. Or, to put it differently, we think that to some extent the complexity and difficulties arising from such differences can be partly offset by the creativity that such situations may generate. Also, tech ecosystems across Europe seem to share some common modern values and to be relatively aligned regarding working practices, driven by flexibility, agility, creativity and a sense of purpose.

With at least 24 languages, the linguistic hurdle is more significant, though it is partly mitigated by the role of English as a common interface. English is relatively well spoken in the tech ecosystem – probably better than in many other ecosystems. The success of U.S. start-ups, combined with the historical importance of English on a global scale, may have contributed to this situation. Nevertheless, it inevitably slows down some processes and interactions. Still, we think the impact of this hurdle is likely to decline in the years ahead, as the level of English among E.U. member states converges towards that of the Nordic countries especially.

All in all, we do not think cultural diversity constitutes the most significant barrier to tech development in Europe, and as interactions between ecosystems intensify, we invite you to see it as an opportunity for greater creativity and thus innovation.   

C. Talent mobility

Tech talent shortages, especially for software developers, could seriously impede growth potential in the months and years ahead. Improved recruitment of workers outside the E.U. and a better allocation of competencies across the E.U. should help to optimize growth. Many European countries already have special programs for tech visas. In France, for instance, the “French Tech Visa” is a simplified procedure to obtain a multi-year residence permit – “Talent Passport” – which is intended for founders (selected by partner incubators and accelerators), employees (within companies recognised as “innovative” by the French Ministry of Economy and Finance) and investors (who wish to settle in France).

In order to reduce complexity among member states, the ESNA (European Start-up Nations Alliance – an entity launched by the E.U. in Nov.21 with the objective of promoting start-up nations’ standards, which consist of a set of best practices that can be shared among member states) will contribute to better monitoring and harmonisation of immigration practices in the area of start-up mobility, providing a reference point for immigrants planning to come to work in Europe.

Beyond visa policies, further structural reforms in the areas of education and research will increase attractiveness to foreign talent. Here, too, counteracting fragmentation by concentrating centres of excellence would definitely prove to be a crucial lever.

While the path towards further harmonisation has been set, key steps still need to be taken at the E.U. level to improve market conditions and better protect citizens facing new types of technological threats.

II. Building trust and restoring competition  

Against the backdrop of the growing predominance of – “foreign” – big tech companies (especially the U.S. GAFAM) and increased concerns among citizens about their privacy, the E.U. Commission proposed in 2020 two legislative initiatives in the field of digital services in order to modernise rules that had remained largely unchanged since the early 2000s (3). The Digital Markets Act (DMA) and the Digital Services Act (DSA) (together referred to as the “DSA package”) were adopted by the European Parliament in early 2022 and are expected to be discussed and finalized with member states by mid-2022.

A. Building trust with the Digital Services Act

The DSA primarily aims to protect consumers / citizens by creating a safer and more open digital space in which the “fundamental rights” are preserved. Although the Act covers a broad category of online services, it is intended to provide a set of rules for online intermediaries and platforms (online marketplaces, social networks, content-sharing platforms, application stores, online travel and accommodation platforms, etc.), especially larger ones, which will be expected to tackle disinformation and online hatred.   

The DSA is thus a fight for more trust in data, and the more European citizens trust data, the more technology and innovation are likely to proliferate.

Complementing the DSA, the DMA appears to be even more explicitly geared to addressing internal market efficiency and the development of European tech.  

B. Restoring competition with the Digital Markets Act

Online actors have provided significant benefits to European consumers and businesses by making the trade of goods and services more efficient, both within the E.U. and with the outside world. Without denying these benefits, in recent years the E.U. institutions have intensified their efforts to address competition infringements resulting from some tech monopolies. Using the existing framework of European competition law, the European Commission has conducted several antitrust inquiries, against Google in particular, which resulted in significant fines. It is now closely investigating the data collection and advertising processes at Google, Amazon, Facebook and Apple. In addition, the Commission has found that the regulation of digital services largely takes place at the member-state level. This leads to new barriers in the internal European market, which favours well-established, very large “gatekeeper” platforms that function as bottlenecks between businesses and consumers for important digital services. By curbing preferential self-referencing and the unfair use of competitors’ data, the Act aims to establish “asymmetric rules” against these larger actors, so as to encourage the emergence of smaller competitors.

It is not clear whether the DSA package in itself would provide clear benefits for the development of European tech, as the GAFAM might be able to circumvent the measures to a certain extent and their historical contribution cannot be easily replaced. Behind it also lies the E.U.’s intention to limit the growing influence of foreign, privately-led rule-makers, and to combat disinformation. In that regard, and without neglecting the potential net positive impact in terms of harmonisation and market efficiency at the European level, we consider these acts to be primarily politically motivated.

III. Expanding E.U.-backed financing

In 2021, the U.S. accounted for around half of global venture capital investment (with 1/4 of global GDP) – a clear indicator of the scale of the financing for innovation and emerging technologies – whereas Europe accounted for only 1/5 of it (with 1/5 of global GDP). As we have seen, improving regulation and competition in Europe is likely to foster innovation. In Feb.22, the E.U. took additional measures not only to accelerate and expand the financing of start-ups and scale-ups, but also to mitigate the relative influence of foreign investments.

A. Additional funding opportunities for deep tech and scale-ups

Launched in Mar.21 with a €10bn budget until 2027, the European Innovation Council (EIC) – under the auspices of the European Commission – aims to contribute to the financing of disruptive innovations and the emergence of European deep tech leaders. In doing so, the objective is to improve the transfer of fundamental knowledge into practical innovations that can be brought to market, while improving the connection between the different European hubs. Part of the EIC budget is earmarked for fundamental research, but the bulk of the financing (2/3) is dedicated to technologies that have already been validated, with a view to helping commercialise the resulting products. In the latter case, funding is provided through subsidies of up to €2.5m or equity investments of up to €15m. In that regard, the fund should be seen more as a catalyst for late-stage venture capital funding.

Illustration 2: European Innovation Council funding opportunities.

Following the “Scale-up Europe” conference in Feb.22, several E.U. member states announced the creation of a fund of funds within the framework of the European Investment Fund (EIF). So far, €3.5bn has been injected into the fund (principally backed by France and Germany), but the aim is to raise more than €10bn to facilitate the emergence of private late-stage funds of at least €1bn. By comparison, around €70bn of venture capital was raised by European countries in 2021 (with a significant portion coming from foreign investors) and almost €300bn in the U.S.   

B. Fostering the semiconductor industry

In the aftermath of the economic recovery from the Covid pandemic, a global shortage of semiconductors seriously impeded the growth potential of some industries, including the automotive sector. The shortage once again highlighted the urgent need for European countries to reduce their dependence on Asian countries, especially for components that play a crucial role in our modern economies. A top priority since 2019, the issue was finally addressed in Feb.22 when the European Commission unveiled a €43bn investment plan – the Chips Act – to boost research and production capacities in the field of chips.

The plan’s objective is to achieve a 20% market share of semiconductor production by 2030, up from 10% in 2021. As the global market is expected to double by 2030, European production should be multiplied by a factor of four by the plan’s time horizon. In addition to increasing capacities, European countries will focus their efforts on the most cutting-edge chips, fueling the development of an ecosystem of proficient workers, so that Europe can once again become a net exporter.
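The factor of four follows directly from the figures above; a quick back-of-the-envelope computation makes the arithmetic explicit (the 2021 market size is normalised to 1 purely for illustration):

```python
# Back-of-the-envelope check of the Chips Act target, using the figures cited above.
market_2021 = 1.0                  # global semiconductor market in 2021, normalised
market_2030 = market_2021 * 2.0    # the global market is expected to double by 2030

eu_production_2021 = 0.10 * market_2021  # 10% European market share in 2021
eu_production_2030 = 0.20 * market_2030  # 20% targeted share by 2030

multiple = eu_production_2030 / eu_production_2021
print(multiple)  # 4.0 -> European production must roughly quadruple
```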

Such an industry investment plan, tantamount to state-backed subsidies, is far from common within the scope of the European Union and once again illustrates the determination of its member states to reduce their dependence on strategic components and strengthen their position among the international competition.

The worldwide battle for technology supremacy is not over. Although U.S. and Chinese predominance is certain, the European Union has clearly demonstrated that there are actionable levers to foster innovation further in Europe, and that measures aimed at enhancing technology are also designed to establish genuine sovereignty at the European level.

Notes:

  • (1) The “Scale-up Europe” conference is part of the “Scale-up Europe” initiative launched in Mar.21 by the French government as an extension of the “French Tech” philosophy to Europe. It focuses particularly on themes such as (i) growth funding, (ii) start-up talents, (iii) deep tech and (iv) bridging the gap between start-ups and corporations. At the time, more than 200 start-up founders, CEOs of corporations, investors, representatives of public institutions, university leaders and start-up associations fueled a debate that has since been extended to the E.U. level.
  • (2) GAFAM refers to the U.S.-based Google, Apple, Facebook, Amazon and Microsoft. BATX refers to the China-based Baidu, Alibaba, Tencent and Xiaomi.
  • (3) The “E-Commerce Directive”, which was adopted by the E.U. Parliament in June 2000.

Pricing in SaaS (“software as a service”): How it is evolving to remain an essential contributor to success.

SaaS has progressively become a dominant model for software utilization – increasingly replacing perpetual license / maintenance models – and is likely to continue growing through the 2020s, driven by the ongoing expansion of digitalisation, cloud computing and artificial intelligence.

Though the emphasis is often – deservedly – placed on products and solutions, SaaS pricing should not be overlooked, as it is a lever that lies at the core of a company’s success and assists in defining and enhancing its business model. SaaS pricing is often equated with recurring fixed subscription-based licence fees, but we show why the reality is more complex. We also explore the reasons behind some trade-offs with more usage-based models.

I. Pricing in SaaS: An essential contribution to the success of a company’s business model.

a. Value-based pricing centered around the client.  

In the SaaS industry, there is a relatively clear tendency to adopt value-based pricing over cost-plus or competitor-based pricing. It is not that there is no competitive benchmark when setting prices, or that the cost structure does not matter, but innovation-based offerings – supported by strong R&D – are less subject to commoditisation and more prone to differentiation, driven by business models that have lower variable costs and are designed to scale.

Value-based pricing is customer-centric, focused on the value delivered to clients/users. This is why the triad of Positioning/Packaging/Pricing is all the more important in SaaS, whether the company pursues a land-and-expand penetration strategy or a profitability strategy [note that in this article, when referring to pricing, we think of it as a process that includes the three “Ps” mentioned above].

b. Pricing drives adoption, ARPU, retention and upsell.

Though the literature on customer acquisition and retention is far more developed than that on pricing, the latter indirectly contributes to the former.

Pricing decisions have an impact on client adoption: by setting more tailored combinations of offers, they make sales and marketing processes easier and lower the cost of acquiring clients (CAC).

With consistent positioning and segmentation of the client base, a company can expect to capture a greater share of the value created and thus achieve a higher ARPU (average revenue per user).    

For the same reasons, and without neglecting the quality of the products and the competencies of the workforce – especially customer success and account managers in the present case – appropriate pricing results in less churn and higher upsell/cross-sell potential.

Pricing also impacts personnel motivation, whether for product, marketing, sales, account executive or customer success employees. For sales especially, pricing becomes all the more effective when it is easily understood and appropriately tied to variable compensation.

As a result, pricing can be considered a key determinant of the evolution of the LTV/CAC ratio, as the lifetime value is determined by the ARPU and churn, and as the CAC results from a more efficient acquisition funnel stemming from easier adoption.
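The relationship described above can be sketched with a common, simplified shorthand (an illustrative model, not a definitive one): LTV as margin-adjusted ARPU over the expected customer lifetime, the latter being the inverse of churn.

```python
def ltv(arpu_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Simplified lifetime value: margin-adjusted ARPU times expected lifetime (1 / churn)."""
    return arpu_monthly * gross_margin / monthly_churn

def ltv_cac_ratio(arpu_monthly: float, gross_margin: float,
                  monthly_churn: float, cac: float) -> float:
    """Unit-economics ratio: value captured per euro spent acquiring a client."""
    return ltv(arpu_monthly, gross_margin, monthly_churn) / cac

# Hypothetical figures: €100 monthly ARPU, 80% gross margin, 2% monthly churn, €1,000 CAC
print(ltv_cac_ratio(100, 0.80, 0.02, 1_000))  # 4.0
```

With these (hypothetical) inputs, pricing moves that raise ARPU or reduce churn improve the ratio directly, while easier adoption improves it through a lower CAC.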

The impact of pricing on recurrence is more subtle. Though flat-fee subscriptions may appear to be the ideal option, as they notably generate recurring revenues – particularly valued by investors –, more variable, consumption-based models offer other benefits, which are increasingly valued by both software providers and their clients. Indeed, the quest for recurring revenues from fixed fees should not be pursued at the expense of LTV/CAC considerations.        

II. The growing adoption of usage-based pricing.

a. “User-based” vs. “Usage-based” pricing.

SaaS pricing has historically been defined more as a fixed flat-fee per “user” within the framework of a subscription model. When we talk about a “user”, we are referring to a broader definition, which includes a seat, an account (or even an “active” account etc.), and other metrics such as a company’s size, the sector in which it operates, and more generally any other factors not directly related to the usage of the service (1). User-based pricing has the merit of being simple and predictable for both the software provider and its clients, while revenue can be scaled as the number of users grows, both with the existing client base and new clients.     

Usage-based models scale with the actual consumption units of a product – also sometimes referred to as “pay-as-you-go” or “pay-per-use” (1). These pricing schemes involve metrics that can be directly tied to the customer’s perceived value: gigabytes of data, number of transactions processed, number of SMS or emails sent etc. These models used to be common among “Infrastructure as a Service” (IaaS) companies (AWS, Microsoft Azure, Google Cloud, Digital Ocean, OVH…). Though less predictable than user-based models, they facilitate easier adoption for clients while reducing cost-related barriers to entry, and they can better link the price paid (cost incurred) to the value received. On the software provider side, they help to better absorb potentially higher variable costs of sales (hosting, data, customer success…), while unlocking the potential upside.

The frontier between user- and usage-based models can be relatively porous. Within the framework of “tier-based” models, packages can refer to a user while encompassing usage components in the proposed features (through ranges or incremental volumes).
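A minimal sketch of such a hybrid scheme (all parameter names and values are hypothetical): a fixed per-user subscription that includes a usage allowance, plus a pay-per-use overage beyond it.

```python
def monthly_bill(users: int, units_used: int,
                 per_user_fee: float = 49.0,
                 included_units_per_user: int = 1_000,
                 overage_rate: float = 0.01) -> float:
    """Hybrid pricing: fixed subscription per user, plus pay-per-use beyond the included volume."""
    included = users * included_units_per_user
    overage = max(0, units_used - included)
    return users * per_user_fee + overage * overage_rate

print(monthly_bill(10, 8_000))   # 490.0 (usage stays within the included allowance)
print(monthly_bill(10, 12_000))  # 510.0 (2,000 overage units at 0.01 each)
```

The fixed component preserves predictability for both sides, while the overage term lets revenue scale with actual consumption.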

As shown in the “State of Usage-Based Pricing Report” conducted in Nov.21 by K.Poyar and S.Kalevar of OpenView, there is an interesting trend of larger SaaS companies moving to usage-based pricing, at least in hybrid forms.      

b. The growing contribution of usage-based components in SaaS pricing.

In their report, K.Poyar and S.Kalevar investigate how SaaS companies price their products (2). They found that 45% of SaaS companies deliver their product through usage-based schemes (compared to 34% last year), with half employing “largely usage-based pricing” and the other half using more hybrid models that include a portion of fixed subscriptions.

Usage-based pricing is still not relevant for some products, business models and types of clients, especially those who are likely to continue to favour predictability – notably when usage metrics are too variable or too sensitive to the marginal costs of consumption. Still, the growing trend towards usage-based pricing reflects a certain convergence of interest among buyers and sellers for more flexibility and more consistency in the relationship between pricing/cost and value. Also, as the software industry is increasingly driven by the automation of manual processes and by artificial intelligence, user metrics appear to be becoming less relevant: the more successful a product becomes, the fewer user seats a customer actually requires.

The expansion of usage-based pricing is also likely to be linked to the growing success of “product-led growth” SaaS business models (a notion also coined and promoted by OpenView), which consist of a “go-to-market strategy that relies on using your product as the main vehicle to acquire, activate and retain customers” (see Wes Bush’s book “Product-Led Growth”).

The evolution of SaaS pricing should be closely watched in the coming months/years, as it is an interesting signal of more underlying changes in business models, which will redefine not only the way customers use and consume products or investors value a target, but also how a SaaS company itself is organised and the role and interaction of its different functions.     

Pricing in SaaS – User-based vs. Usage-based models:

Notes:

(1) To underscore the separating lines between pricing models, in our article we propose a distinction between user- and usage-based pricing, referring in both cases to extensive definitions. These definitions may vary in other studies: in some cases, user-based pricing can be assimilated to consumption-based models (the increase in the number of users is then treated as an increase in the consumption of the solution), or tier-based pricing can be presented separately.

(2) The OpenView report is based on a survey encompassing around 600 SaaS companies, of which 51% are based in the U.S. and 18% in Europe.

Sources of information:

[The views expressed above are those of IterAxon, based on (i) the knowledge and experience acquired from multiple engagements in the SaaS sector and (ii) on the analysis/studies conducted by third-party sources, as mentioned below in the “relevant and recommended sources” section.

IterAxon is a financial consulting firm specializing in innovative, fast-growing sectors, including SaaS: When performing financial due diligence or providing financial expertise to its clients, understanding the pricing strategy is an essential prerequisite to a better appreciation of the business model and the identification of growth and profitability levers].     

Technology and inflation: How they influence each other, in the short and longer term.  

Soaring prices since Q2’21 have led to increasingly intense debates on the transitory vs. persistent nature of particularly high levels of inflation, along with the existence of a Tech bubble and the associated risks of correction or bursting. Inflation (CPI – Consumer Price Index) reached more than 7% and 5% in Dec.21 on an annual basis for the U.S. and E.U., respectively. During Jan.22, the NASDAQ (-100) – where most of the largest tech companies are listed – fell by (8.5)%, compared to +27% for FY21, amid concerns about expected interest rate hikes by the U.S. Federal Reserve (the “Fed”) to curb inflation in the coming months.

Since the probability, amplitude and timing of potentially more significant corrections in the valuation of Tech stocks (whether listed or non-listed, through lower valuation in capital raising) are particularly difficult to estimate, we propose to examine – in order to put these subjects into perspective – the relation between inflation and technology, in the short and longer term.  

A. The sources of inflation and their impact on Tech dynamics:

(a) CPI (Consumer Price Index) evolution (annual rates), Jan.2017-Dec.2021

Source: OECD (1)

  1. The “post-Covid” rise of some prices and their impact on Tech companies’ dynamics and profitability.

One of the most significant price increases in the “post-Covid” period (say, since the end of major lockdowns in 2020) affecting Tech companies pertains to semiconductor chips, whose prices rose by 15% in FY21, mainly due to insufficient production capacities (higher raw material prices, supply chain bottlenecks, geopolitical tensions between the U.S. and China over Taiwan, unfavorable extreme weather conditions, delayed investments in new foundries…) and higher-than-expected demand (electric cars, 5G, graphics cards…), which grew by 25% in FY21. Though chip companies generated high revenues and profits in FY21, the price increases directly (when representing a direct production cost component of a company) and indirectly (through the tension exerted on other industries’ prices) impacted the margin rates of other hardware companies, especially those that could not entirely pass the increases on to their customers, and to a lesser extent software companies (higher costs for IT equipment – such as computers – and in some cases IT hosting services). Tensions in the semiconductor sector are expected to persist in the coming months, but should gradually ease by the end of FY22, as most of the major chip players have planned major investments in their capacities, and provided that geopolitical tensions over Taiwan, which hosts the most significant international wafer foundries (the company TSMC accounts for more than half of the market), do not escalate further. Overall, and all things being equal, we do not expect any significant impact on Tech software companies’ long-term profitability and valuation.

Substantial energy price increases have contributed to most of the inflation since Q2’21 (directly or indirectly), rising by more than 20% on an annual basis in late 2021. The increase appears to have little to do with the transition to clean energy, but more with higher-than-expected demand (due to a rapid global economic recovery, a colder and prolonged 2020-21 winter in the Northern Hemisphere…) and lower-than-required supply (due to a decline in oil and gas investments in recent years as a result of price collapses in 2014-15 and 2020, delayed maintenance work in 2020, logistics bottlenecks, and geopolitical instability).

(b) Energy CPI evolution (annual rates), Jan.2007-Dec.2021

Source: OECD (1)

Though ambitious carbon reduction objectives might result in structural tension on prices in the long run, the current surge in energy prices since 2021 appears to be cyclical. Therefore, we do not expect it to remain at this level of growth for long (prices may remain high in the coming months, but the rate of increase should slow down, as evidenced by crude oil futures contracts). As far as Tech companies are concerned, the surge in energy prices may adversely impact FY21 and H1’22 profit margin rates for hardware (mainly through higher manufacturing costs) and to a lesser extent for software (hosting costs…), at least when their ability to pass them through to customers is limited. However, higher energy prices in themselves should not have a significant and direct impact on the growth outlook of software companies in particular.

Semiconductor and energy price increases may impact the growth and profitability of Tech companies (especially hardware), but there are levers to mitigate their effects in the coming quarters. On the other hand, fiercer competition to attract and retain technology talent may continue to exert more structural pressure on wages, profitability and growth, with overall inflation reinforcing the trend (through greater demand for wage increases). Wage increases in Tech software companies may in themselves have a limited effect on overall inflation: particularly in SaaS or marketplace companies, the nature of the business models and their scaling configuration may not necessarily result in a pass-through to pricing. Also, wage growth – though fuelled by historically low unemployment rates in the U.S. and E.U. – may turn out to be an insufficient lever to attract talent, and should be combined with other, non-financial considerations.

Indirectly, the three effects we have chosen to underscore – semiconductors, energy and wages -, combined with the favourable fiscal and monetary policies since 2020, should expedite decisions to increase interest rates to address higher inflation.              

2. Inflation, expected rate increase and their impact on Tech valuation.

Inflation trends are quite similar in the world’s major economic regions. Still, it is interesting to note that the increase in the E.U. has lagged that in the U.S. for several months, with a lower magnitude too (both being driven by energy and food) (see graph (a)). As of early Feb.22, the U.S. Federal Reserve plans to increase its interest rates several times over the course of FY22 to fight inflation, whereas the European Central Bank (E.C.B) is not considering doing so, because of differing economic recovery cycles and economic situations among Eurozone members (some of which are particularly indebted). 

Whether or not the E.C.B. raises its rates, the Tech industry as a whole should be impacted by the Fed’s decision, as financial market reactions have already shown. As higher interest rates result in (i) lower discounted expected cash flows (all other things being equal, the discount rate increases with interest rates, and thus so does the weighted average cost of capital) and (ii) lower investment (for the same reasons, with a higher opportunity cost), the valuation of Tech companies listed on stock markets dropped in Jan.22 (as discussed when referring to the NASDAQ; see above). In such a configuration, financial markets favour value over growth stocks, as illustrated in particular by the convergence of the performance of Berkshire Hathaway (Warren Buffett) and the ARK Innovation ETF (exchange-traded fund) (Cathie Wood) over the Dec.19-Jan.21 period (2).
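The mechanics can be sketched with a toy discounted cash flow calculation (all figures are hypothetical, not a valuation model): the further out a company’s cash flows sit, the more its present value suffers when the discount rate rises.

```python
# Illustrative sketch (hypothetical figures): how a higher discount rate
# compresses the present value of a growth company's future cash flows.

def present_value(cash_flows, discount_rate):
    """Discount a list of yearly cash flows (years 1..n) back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A stylized "growth stock": cash flows growing 30% p.a. for 10 years.
growth_cfs = [10 * 1.30 ** t for t in range(10)]

pv_low = present_value(growth_cfs, 0.08)   # discount rate before tightening
pv_high = present_value(growth_cfs, 0.10)  # after a 2-point rate increase

drop = 1 - pv_high / pv_low
print(f"PV at 8%: {pv_low:.1f}, PV at 10%: {pv_high:.1f}, drop: {drop:.1%}")
```

In this stylized case, a two-point rate increase removes roughly 12% of the present value; the longer-dated the cash flows, the larger the effect, which is why growth stocks are the most exposed.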

The evolution of the stock valuation of listed Tech companies might not necessarily impact non-listed ones (backed in particular by venture capital or growth funds) directly or to the same extent. Still, should lower stock market valuations and higher interest rate expectations persist, subsequent funding rounds are likely to be carried out at lower valuations, and investors are likely to be much more selective about their targets (even though dry powder remains high) (please refer to the interesting article on Tomasz Tunguz’s blog (3)).

We have seen that higher inflation may impact Tech growth dynamics, profitability and valuations over the short and long term, with effects notably depending on expectations regarding the evolution of interest rates. In the meantime, technologies and innovation may exert structural disinflationary forces on the economy.

B. The structural impact of technology and innovation on inflation.

  1. Moore’s Law and the structural decrease in chip prices.

According to the empirical Moore’s Law, every 18 to 24 months, semiconductor chips get twice as powerful for the same cost (or, to put it more precisely, the number of transistors doubles at constant cost). Though impacted by supply chain disruptions over FY20-21 (with a visible effect on prices in FY21), the positive contribution to disinflation from smartphone, laptop and computer prices etc. should persist over time (though potentially at different paces).
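As a back-of-the-envelope illustration (not a forecast), the deflationary mechanics of Moore’s Law can be expressed as a simple halving function: if transistor counts double every ~2 years at constant chip cost, the cost per transistor is roughly halved over the same period.

```python
# Toy illustration of Moore's Law as a disinflationary force.
# All figures are normalized and purely illustrative.

def cost_per_transistor(initial_cost, years, doubling_period=2.0):
    """Cost per transistor after `years`, assuming constant chip cost
    and one doubling of transistor count per `doubling_period` years."""
    doublings = years / doubling_period
    return initial_cost / (2 ** doublings)

c0 = 1.0  # normalized cost per transistor today
print(cost_per_transistor(c0, 10))  # 0.03125, i.e. ~1/32 of today's cost
```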

2. Unlimited economies of scale in SaaS.

In most cloud-based Software as a Service (SaaS) business models, companies benefit from limited variable costs and increasing returns to scale. As MIT professors A. McAfee and E. Brynjolfsson put it, digital disruption has been driven by the economics of free, perfect and instant: “The marginal cost of an additional digital copy is (almost) zero, each digital copy is a perfect replica of the original, and each digital copy can be transmitted across the planet virtually instantly” (4). Production can then scale efficiently and almost without limit to rapidly meet additional demand, wherever it comes from. These forces can be strengthened by machine learning as well as network effects, contributing to additional productivity gains driven by larger markets, user bases and data. Nevertheless, machine learning and network effects tend to favour the emergence of winner-takes-all actors with higher pricing power, which could ultimately mitigate the positive effect on disinflation stemming from the scaling potential of software companies.

3. Marketplaces and market efficiency.

By allowing more transparency and easier comparison of prices and offers, combined with more intense competition between sellers, buyers and workers, digital marketplaces contribute to a more efficient matching of supply and demand and to more efficient markets. Notwithstanding the side effects of an unbalanced relationship with some platform behemoths in certain cases, marketplaces exert downward pressure on inflation by limiting – most of the time – the pricing power of their participants.

4. Manufacturing and supply chain productivity gains.

As shown by the previous effects resulting from decreasing chip costs and the scaling potential of software companies, the positive contribution of technologies and innovation essentially lies in their ability to foster productivity gains in both tech-related and non-tech-related industries, increasing the growth potential of worldwide GDP (Gross Domestic Product).

Enumerating all the innovations that could significantly improve productivity is not the aim of the present article. Still, the potential of emerging technologies appears particularly promising for manufacturing and supply chains. Fuelled by artificial intelligence / machine learning tools, automation should leap forward in the coming years, driven by more efficient robotization and autonomous vehicles. Also, the production of electricity from renewable energies and lower battery costs (the cost of lithium-ion storage dropped by 19% p.a. over the 2010s (4)) should contribute to further lowering production and transportation costs (4).

5. Information abundance.

The measurement of inflation (and GDP) draws a veil over the wealth provided by the abundance of information the internet gives us. Today, everyone can have access – for a very reasonable price – to data and knowledge that would have represented an inestimable amount of money in the past (and much of which was not even available, at any price). This information enables us to better understand a specific field of knowledge, improve our professional skills, share our thoughts with people all over the world, and enjoy music and videos with almost the sole constraint of time. Obviously, this does not directly impact food or housing prices (nor the time constraint of a 24-hour day). Still, we should never forget how rich we are thanks to digital innovations, and how the potential to use this knowledge could provide us with more sustainable growth.

Relevant sources / notes:

The contribution of artificial intelligence to software companies’ profitability

The emergence of cloud computing and big data, fostered by improved computing and storage capacity, has helped remove traditional constraints on scale and profitability. While these potential impacts continue to be enhanced, the growing use of AI by software companies in their offerings has a more equivocal effect on scale and profitability, which should not be overlooked. We explore below the potential obstacles that could arise and how they should be addressed.

The focus of our article is on companies that use AI (including machine learning – ML; we refer to both when we speak of “AI”) as the dominant component of their offerings – albeit not at too early a development phase. Note that such companies often deploy their models within a subscription-based SaaS framework. For clarity, we propose a semantic distinction between AI-powered models – AIaaS – and more traditional, standard SaaS models.

I. A definition of profitability measurement in AIaaS

Though EBITDA tends to be a widespread measure of profitability for most business models, in a SaaS context and for the purposes of our analysis, we will refer to Gross Margin (GM), which is a more relevant indicator for assessing issues related to scalability. GM can be defined as revenue minus COGS (“cost of goods sold”), namely direct variable (or semi-variable) costs.

In a “standard” SaaS model, COGS principally encompasses (i) cloud infrastructure costs, (ii) customer success and (iii) professional services, with GM typically standing at around 70-80% of revenue (or even more).

For AIaaS, the main categories of COGS are basically the same, but beyond cloud computing costs, a company needs to purchase specific data to feed its models. Also, customer success management (CSM) and professional services functions may be expanded, and more generally, part of the cost of R&D employees can be considered direct personnel costs.
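To make the comparison concrete, here is a minimal sketch with purely hypothetical cost structures; the figures are illustrative assumptions chosen to echo the ranges discussed in this article, not benchmarks.

```python
# Hypothetical comparison of gross margin (GM = revenue - COGS) between a
# "standard" SaaS model and an AIaaS model whose COGS also carries data
# purchases, retraining compute and an expanded customer success function.
# All inputs are illustrative assumptions.

def gross_margin_rate(revenue, cogs):
    """GM as a share of revenue, with COGS given as a dict of cost items."""
    return (revenue - sum(cogs.values())) / revenue

revenue = 100.0

saas_cogs = {"cloud": 12.0, "customer_success": 8.0, "prof_services": 5.0}
aiaas_cogs = {"cloud": 20.0, "customer_success": 12.0, "prof_services": 6.0,
              "data_licences": 8.0, "model_retraining": 6.0}

print(f"SaaS GM:  {gross_margin_rate(revenue, saas_cogs):.0%}")   # 75%
print(f"AIaaS GM: {gross_margin_rate(revenue, aiaas_cogs):.0%}")  # 48%
```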

II. Cloud computing and data procurement

In the case of AIaaS, cloud computing operations tend to be more complex, with a higher volume of data, which is often less structured or even unstructured, particularly heavy to store and/or more difficult to process. As a result, both storage and processing costs are typically higher.

Meanwhile, training data is not usually a one-time charge (in which case it would be recorded as an R&D expense), and retraining models can often be considered an ongoing cost (to be included in GM) (the concept of “data drift” reflects this underlying tendency of many AI models, which require constant retraining over time or else their performance will be crippled). This is all the more conspicuous when a company’s core value added depends on its capacity to provide up-to-date information to its clients. To illustrate, let’s take the example of a FinTech company that provides its clients (investors) with comprehensive information regarding the sentiments, emotions and rumours surrounding a specific investment or targeted company. In such a case, clients require fresh news so they can act quickly and appropriately.

Following this logic, the purchase of specific data – through a data licence subscription to access specific databases, for instance – will be treated similarly: classified in COGS when such a purchase needs to be made on a recurring basis, and in R&D when it is only made for a certain period of time within the launch phase of the AI-driven service.

Still, to mitigate these effects on AIaaS profitability, it appears likely that storage and computation execution capabilities will improve over time, and the associated costs will decrease. However, whether the scale and rapidity of these effects will be sufficient to bring AIaaS profitability patterns close enough to the more “traditional” SaaS model remains to be seen.

III. AI and irreducible human intervention

Data set training for AI models is still far from being a fully automated process. Human intervention is often inevitable to manually clean and label data sets. Following on from the data drift phenomenon previously mentioned, dedicated employees are expected to remain continuously mobilised, even beyond the usual “maintenance” (bug fixing, etc.).

Beyond these structural requirements, some AI-based services may require humans to be mobilised in “real time”. This is particularly the case for (i) social media moderation with human reviewers (some Facebook employees recently admitted – in Nov.21 – that it is inherently difficult for their Group to cope with the issue of content moderation, despite having AI tools, backed by an apparently insufficient human workforce) and (ii) autonomous vehicle systems with remote human monitoring.

Finally, if the models are too complex – not sufficiently narrowly defined -, “edge cases” may result in a multiplication of human interventions, particularly at each client onboarding, but also later on.

IV. Addressing the scaling and profitability of AI-based models

Providing a range of typical GM rates in AI-dominated SaaS models can prove to be a relatively inaccurate exercise. The contribution of AI to business models can vary dramatically, and many companies are still in their AI ramp-up phase. Sometimes ratios of 50-60% are highlighted, but we also observe companies with much lower ratios. Whatever the impact, whether positive, neutral or negative, these companies at times have no choice but to integrate AI technology, if only to survive.

In their excellent article “The New Business of AI”, Martin Casado and Matt Bornstein (Partners at the VC firm Andreessen Horowitz) point out that AI companies appear to combine characteristics of SaaS (“pure” SaaS) and services business models. Though these appear structurally less profitable than traditional SaaS models, they recommend adopting a strategy that combines the best of both, specifically (i) eliminating model complexity by favouring single models over unique models per client, (ii) narrowing problem domains to minimise persistent edge cases, (iii) conservatively confronting the economic reality of higher-than-expected variable costs and (iv) planning for change in the tech stack as better tools emerge for automated model training and for the standardisation of developers’ workflows.

Identifying revenue erosion and understanding its main sources

 

Churn is basically the impact of losing clients (or contracts), expressed either in terms of the number of clients lost – “client churn” – or the revenue lost as a result of lost clients – “revenue churn”. Downsell is defined as the diminution of revenue from existing clients (or contracts). The concept of “churn” may sometimes implicitly include downsell. Additionally, a distinction can be made between “gross” and “net” churn, the latter encompassing the offsetting impact of upselling effects (upgrades, expansion, etc.). Here, we will focus our analysis on “gross” churn, emphasising the impact of client losses (with an analysis that applies to downsell in most instances).

Although churn is referred to as a sort of “negative” force, it is not necessarily bad in itself. For instance, it can result from a company’s voluntary strategy to focus on certain categories of clients or a specific type of growth. Still, understanding the way it works and how to measure it appropriately definitely contributes to improved business and investment decisions.

I. Formulating the most relevant definitions

Though freedom and creativity are permitted in setting the calculation methodology, analysing churn is all the more relevant when (i) revenue is recurring by nature, (ii) the loss is addressed on a client basis rather than on a contract or project basis and (iii) the primary focus is on the loss expressed in terms of value.

  1. Recurring vs. non-recurring revenue

Determining churn is mostly useful in business models dominated by recurring clients and/or recurring contracts, which is typically the case in SaaS subscription-based models. In such cases, as revenues per client/contract are not supposed to be reversed over time, losing a client in particular can be considered permanent (and not offset a few months later by new recurring revenues from the same client). In the case of an interruption of revenue from a client for, say, a few months, the notion of “churn” should be replaced by alternative concepts (downsell, contraction…). In SaaS models, the non-recurring part of revenues (professional services: integration, training…; proofs of concept etc.) should then be excluded from the calculation.

In some other business models, the frontier between recurring and non-recurring revenues is not that obvious. Some clients may indeed contribute to revenues every year, and therefore be considered “recurring” clients, while in the meantime their contribution levels are relatively variable. In such cases, it can still make sense to monitor churn, though its calculation should be adapted accordingly (depending on the variance of revenues, a “volume” approach – client churn – is preferred).  

2. Revenue per client vs. revenue per contract/project

We believe that churn calculation, particularly in a SaaS environment, should be performed on a client basis rather than a contract or project basis. Indeed, the loss of a client is less easily reversible and simpler to identify, whereas the loss of a contract with an existing client may be offset by the signing of new contracts. Also, owing to the complexity of following up on multiple contracts per client, we often observe that it is not possible to precisely ascertain whether a new contract can be considered a replacement for one that is ending, or a brand new contract with a completely different offer and no overlap. However, we are not saying that monitoring churn by contract or project is not useful. With all the required details, it provides interesting information about the dynamics of the offer per client, and is particularly well suited to scrutinising downsell drivers.

3. Value vs. volume-based approaches

The churn rate can be calculated using different methodologies. One of the most basic, a “volume-based” approach, is to measure the ratio of the number of clients lost over a certain period of time to the number of clients at the beginning of the period (“client churn”). Note that the period under consideration should not be so long as to skew the calculation with losses related to new clients acquired during the period. While this approach is interesting for analysing a company’s capacity to retain its clients, it does not provide any information regarding the real impact of churn on revenue. Indeed, a company can lose a significant proportion of its clients while still maintaining a strong base of recurring revenue. That is why the priority approach, especially in a SaaS-based environment, should be to focus on revenues expressed in monetary units.

From a financial analysis perspective, the churn rate calculation derived from lost recurring revenues is the most interesting one. In SaaS models with monthly recurring revenue (MRR), it involves measuring the ratio of the MRR lost over a period to the MRR at the beginning of that period (the yearly rate, expressed over a 12-month period, is the most common). Note that this approach, when combined with the previous volume-based one, provides useful information on loss concentration.
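The two measures can be sketched as follows, with hypothetical figures; note how a 5% client churn can translate into only 2% revenue churn when losses are concentrated among small clients.

```python
# Minimal sketch of the two churn measures discussed above, using
# hypothetical figures. "Gross" revenue churn only counts losses on the
# opening base (no upsell offset), in line with this article's focus.

def client_churn_rate(clients_start, clients_lost):
    """Volume-based: share of opening clients lost over the period."""
    return clients_lost / clients_start

def gross_revenue_churn_rate(mrr_start, mrr_lost):
    """Value-based: share of opening MRR lost to churned clients."""
    return mrr_lost / mrr_start

# Hypothetical month: 200 clients and 100k of MRR at the start;
# 10 small clients churn, representing only 2k of MRR.
print(client_churn_rate(200, 10))                # 0.05 -> 5% of clients
print(gross_revenue_churn_rate(100_000, 2_000))  # 0.02 -> only 2% of MRR
```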

Below is an illustration of such a calculation, both on a monthly and yearly basis – only available in (n+1) for the latter.

In the above example, it is particularly interesting to see that yearly churn rates can vary considerably from month to month. With such fluctuations, the life-time value (LTV), derived from a “normative churn rate”, should be carefully assessed. Where there is too much discomfort regarding the normative level of the churn rate, we consider that any LTV calculation should be deemed irrelevant (an alternative approach would be to provide information in terms of ranges).
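To illustrate this sensitivity, here is a hedged sketch of the usual LTV shorthand (expected client lifetime ≈ 1 / churn rate), computed over a range of normative churn assumptions rather than a single point estimate; all inputs are hypothetical.

```python
# Hypothetical LTV sketch: with a *normative* monthly churn rate c, the
# expected client lifetime is roughly 1/c months, so
# LTV ~= monthly margin per client / c. Given the month-to-month volatility
# of churn rates, a range of assumptions is shown instead of one figure.

def lifetime_value(monthly_margin_per_client, monthly_churn_rate):
    """Simple LTV shorthand: margin per month times expected lifetime."""
    return monthly_margin_per_client / monthly_churn_rate

margin = 500.0  # monthly gross margin per client (illustrative assumption)
for churn in (0.01, 0.02, 0.03):  # plausible normative churn rates
    print(f"churn {churn:.0%}: LTV ~ {lifetime_value(margin, churn):,.0f}")
```

A one-point move in the assumed churn rate halves or doubles the result, which is precisely why an uncertain normative churn rate makes any single-point LTV figure fragile.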

II. Deciphering churn through appropriate angles:

Now that the basic definitions have been clarified, let’s go into more detail about the analytical approaches within the framework of a subscription-based model, where churn is expressed in terms of clients and recurring revenue. Note that the following approaches can (and should) be combined.

  1. Size of clients

When broken down by size ranges expressed in terms of recurring revenue (MRR), client churn analysis helps to better understand a company’s strategy and/or the influence of the market. We often observe that churn is more concentrated among clients with lower MRR, which can primarily be explained by (i) an offer less well adapted to smaller clients or (ii) a company focusing less on what it considers to be less profitable clients. Conversely, even if less intuitive, a company may sometimes choose to focus more on small or medium-sized clients, with larger ones considered not profitable enough, owing to less favourable pricing power and/or overly complex (and costly) solutions to deliver.

2. Type of offers and products

Each loss of a client (or contract) can be linked to a dominant type of offer or product. By carefully analysing churn, a company can better understand which offer(s) to focus on. To take this further, in addition to the type of offer, it may prove particularly interesting to analyse the acquisition funnel (for paid clients) and determine, for instance, whether a client who first experienced a freemium or trial offer is less prone to churn.

3. Geography

Some clients from a particular geographic area, irrespective of their size or the type of offer they have subscribed to, may be more inclined to churn, notably due to certain local conditions, for instance, competition, regulation or the economic environment.

4. Vintages

By determining cohorts based on the year the client was acquired (vintage cohorts), we can identify whether churn is more related to clients gained in specific years, as well as calculate the different churn ratios over the lifetime of a contractual relationship. To illustrate the latter case, churn rates can (i) be higher in the early years, when some clients have sufficiently exhausted the basic features of the services provided by the company and are moving towards other, more complex alternative solutions, and (ii) decrease once the client base consists of more loyal clients.
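A vintage-cohort view can be sketched as follows, with toy retention data; churn in year k of the relationship is computed within each cohort, making it possible to see whether losses concentrate in the early years.

```python
# Illustrative vintage-cohort churn: clients grouped by acquisition year,
# with hypothetical counts of clients still active after 0, 1, 2... years.

cohorts = {  # vintage -> active clients by year of life (toy data)
    2019: [100, 80, 72, 68],
    2020: [120, 90, 82],
    2021: [150, 120],
}

for vintage, actives in cohorts.items():
    # churn rate in year k = share of year-(k-1) clients lost during year k
    rates = [1 - b / a for a, b in zip(actives, actives[1:])]
    print(vintage, [f"{r:.0%}" for r in rates])
# In this toy data set, year-1 churn (20-25%) exceeds later-year churn.
```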

5. Reasons

The classification of churn by client size, type of offer, product, geography or vintage provides a particularly interesting analytical framework for identifying the underlying reasons for churn. More explicitly, we recommend investigating client churn whenever it occurs (with more diligence when the loss of clients is more significant – note that some companies provide a specific form asking each churned client to clarify their motivation for cancelling) and classifying losses according to relevant categories, including: (i) a better offer from competitors (price, package…), (ii) a solution that is not reliable enough, (iii) poor customer service, (iv) a solution that is no longer useful, (v) a client in financial difficulty or (vi) financial consolidation within a new group (the last two are often cited in some companies’ communications to underline the quality of their retention policy).

Below is a summary of the different definitions, approaches etc., which we propose in our article:

Blurred Monthly Recurring Revenue (MRR) – A waterfall view

Monthly Recurring Revenue (and its annual equivalent ARR) is a well-known financial aggregate in the world of cloud-based SaaS companies, a sort of holy grail in most subscription-based software valuation models. Though a lot has been said about its definition(s), in this article, we share our views on some of the underlying subtleties that we have noticed during our manifold audit and advisory engagements over the past few years. As the list below is not intended to be exhaustive, our aim is to provide a helpful methodology and draw your attention to the main pitfalls, with a view to better understanding the MRR framework, through a classification in ascending order.

Traditionally, an initial distinction is made between “MRR” and “Committed MRR (CMRR)”, with the difference for the latter being mainly the inclusion of forthcoming signed contracts and other known expected variances. Both categories of definitions will be addressed separately.

Monthly Recurring Revenue (MRR):

1. Billed MRR with non-recurring discounts:

In some leading SaaS subscription management tools, MRR misleadingly includes discounts as billed, without any consideration of their temporary nature. In our opinion, when accounting for discounts, one should consider whether they can be regarded as recurring or not. If they are not recurring, we recommend removing them from MRR. In the following definitions, we assume that discounts are addressed appropriately.

2. Billed MRR with new contracts/projects not being recognised on a full-month basis:

This calculation, which understates MRR, is one we notably see in some early-stage companies. It entails aligning MRR with the monthly recurring revenue recognised in the accounts (under this method, if a new contract starts in the middle of the month, only half of the monthly recurring revenue is recognised), rather than taking into consideration the additional revenue generated on a full-month (30- or 31-day) basis.

3. “Standard” billed MRR:

This is the most common, standard definition of MRR (especially in dedicated SaaS subscription management tools), corresponding to the monthly recurring billing expressed on a full-month basis (with annual or quarterly billing divided by the number of corresponding monthly periods, respectively 12 or 3). This MRR is also the best suited to analysing dynamics over time (new, churn, upsell, downsell etc.).
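Definition #3 can be sketched as follows, with hypothetical contracts; annual and quarterly invoices are divided by the number of months they cover (12 and 3 respectively) to express everything on a full-month basis.

```python
# Sketch of "standard" billed MRR (definition #3): recurring billing per
# contract, normalized to a monthly basis. Contract data is hypothetical.

BILLING_PERIODS = {"monthly": 1, "quarterly": 3, "annual": 12}

def standard_mrr(contracts):
    """Sum of recurring billing, each amount expressed per month."""
    return sum(amount / BILLING_PERIODS[period]
               for amount, period in contracts)

contracts = [
    (1_200.0, "annual"),    # -> 100/month
    (600.0, "quarterly"),   # -> 200/month
    (150.0, "monthly"),     # -> 150/month
]
print(standard_mrr(contracts))  # 450.0
```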

4. Billed and unbilled MRR:

Though recurring revenue is essentially billed regularly, part of it can still be invoiced after the time of the analysis, either because (i) the customer, while still enjoying the service, has not been billed (for whatever reason) and no contract termination or non-renewal is anticipated, or (ii) the billing related to a newly implemented contract has not yet started. As this definition includes all recurring revenue, not only the billed portion, it can be considered more relevant, though the resulting MRR is less easy to monitor on a regular basis.

5. Billed and unbilled MRR + POC (proofs of concept):

Proofs of concept are not included in MRR, as they are supposed to be non-recurring revenues. Still, when they are spread over a certain period of time, and subject to a contract formalisation close to that of subscription fees, the line between them and pure MRR may blur.

As a continuation of the previous definitions of MRR, CMRR extends the view into the near future by taking into account the expected secured MRR, which is all the more important in the context of strong growth. 

Committed MRR (CMRR):

6. CMRR accounting for signed contracts and expected net upsell/churn:

In our view, this is the most relevant definition of CMRR, as it takes into consideration secured new contracts that will start in the coming weeks or months, while also encompassing the known evolution of existing contracts, both in terms of expected churn or expected net upsell.

7. CMRR accounting for signed contracts only:

This definition is also widespread, especially in early stage companies and/or when net upsell and churn on existing contracts are difficult to estimate.

8. CMRR accounting for signed contracts and pipeline:

This aggregate takes into account expected “probabilized” non-signed contracts in the pipeline. We consider this definition to be relatively aggressive.
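The three CMRR definitions can be contrasted on the same hypothetical book of business; the pipeline weighting in #8 relies on assumed win probabilities, which is precisely what makes it the most aggressive.

```python
# Hedged sketch of CMRR definitions #6, #7 and #8 on hypothetical figures.

current_mrr = 100_000.0
signed_not_started = 15_000.0   # contracts signed, starting in coming weeks
expected_net_upsell = 5_000.0   # known expected upsell minus expected churn
pipeline = [(20_000.0, 0.6), (10_000.0, 0.3)]  # (MRR, assumed win probability)

cmrr_7 = current_mrr + signed_not_started          # #7: signed contracts only
cmrr_6 = cmrr_7 + expected_net_upsell              # #6: + expected net upsell/churn
cmrr_8 = cmrr_6 + sum(m * p for m, p in pipeline)  # #8: + probabilized pipeline

print(cmrr_7, cmrr_6, cmrr_8)  # 115000.0 120000.0 135000.0
```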

In summary, we consider definitions #3 and #4 to be preferable in terms of MRR, with #4 being the most relevant on a static basis, while #3 (often quite close, if not comparable, in terms of amounts) is more practical to follow on a regular and dynamic basis. As for CMRR, #6 proves to be the most relevant, and is particularly useful when valuing a company.

How to define marketplaces’ profitability and identify its main drivers?

Online marketplaces are basically platforms that aim to frictionlessly connect buyers and sellers (or at least multiple counterparts). However, given their tremendous growth over the past decade, their profitability is less easy to assess, and margins need to be determined on several levels.

As with most emerging digital companies, EBITDA does appear at first glance to be the most relevant aggregate for analysing profitability, particularly for fast-growing companies. In this article, we will focus on “Gross and Direct margins”, i.e. revenue minus what can be considered the costs directly contributing to the performance of the core activity (gross margin includes the more variable costs, and direct margin the other direct costs). Since marketplaces can take many different forms, we will present the basic and broader case of a commission-based B2B or B2C model (as a reminder, though marketplaces contribute to the expansion of e-commerce, we distinguish them from pure e-commerce enterprises, the latter notably deriving their profitability from “margins” rather than “commissions”). The framework presented below is a proposed analysis grid, to be adapted to each business model and marketplace company.

Gross Merchandise Value (or Gross Merchandise Volume (GMV)) is the most “top-line-positioned” indicator. It is basically determined as the volume of products (or units of services) sold through the platform multiplied by the corresponding unit selling prices (or the number of orders multiplied by the average order price). Though GMV in itself gives a relatively limited indication of the profitability pattern of a marketplace, the analysis of its growth drivers can provide useful information regarding the evolution of margins, not only in terms of values, but also rates.

GMV evolution can be broken down into 3 effects:

  • Volume of products,
  • Prices per product,
  • Mix of products.

The control of these drivers is more or less shared with suppliers (providers) and also relies on market dynamics and characteristics. All these effects impact margins, but only price and mix have an influence on margin rates (volume can have an indirect impact through discounts and price reductions). Note that a relatively similar (and simpler) analysis can be made using the volume of orders and average selling prices per order.
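A simple volume/price/mix decomposition can be sketched as follows, with toy figures for a two-product platform; the volume effect is measured at the prior-period average price, the price effect at current volumes, and the mix effect is the residual created by the shift toward higher- or lower-priced products.

```python
# Illustrative volume / price / mix decomposition of GMV growth between two
# periods. Products, volumes and prices are purely hypothetical.

year0 = {"A": (1_000, 10.0), "B": (500, 40.0)}  # product -> (volume, price)
year1 = {"A": (1_200, 10.5), "B": (700, 42.0)}

gmv0 = sum(v * p for v, p in year0.values())  # 30,000
gmv1 = sum(v * p for v, p in year1.values())  # 42,000

vol0 = sum(v for v, _ in year0.values())
vol1 = sum(v for v, _ in year1.values())
avg_price0 = gmv0 / vol0  # prior-period average price across products

volume_effect = (vol1 - vol0) * avg_price0
price_effect = sum((year1[k][1] - year0[k][1]) * year1[k][0] for k in year0)
mix_effect = (gmv1 - gmv0) - volume_effect - price_effect

print(volume_effect, price_effect, mix_effect)  # 8000.0 2000.0 2000.0
```

Here the positive mix effect reflects the shift toward the higher-priced product B, which lifts GMV beyond what volume and price changes alone explain.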

The “real” Revenue of the marketplace comes from commissions applied to the monetary volume of transactions (GMV) (however, it should be noted that marketplaces can generate other types of revenues, such as subscriptions from premium accounts, advertising, monetisation of data…).

The “take rate” (aka “Rake”) is calculated as the ratio between revenue and GMV (a sort of commission rate). This rate can vary greatly from one “sector” to another, and over time. Its main drivers (both in terms of level and evolution) can be summarised as follows (some of them may be interrelated):

  • Value proposition: the more services and risk the platform provides/bears, the higher the commission rate can be;
  • Competition comes not only from other platforms but also from alternative distribution channels for suppliers;
  • Network effect: the more the platform is used by buyers and/or sellers, the more essential it becomes to its participants;
  • Suppliers’ profit margins and price sensitivity.

Now that we have a clearer understanding of the structure of revenue, an essential contributor to profitability, let’s take a deeper look at margins.

There is no standard approach to the breakdown of margin aggregates, mainly due to the diverse nature of business models. Very often, we observe that two to three sub-aggregates may prove to be relevant (denominations such as “Contribution margin 1”, “Contribution margin 2”, “Contribution margin 3” are widespread). Yet, a useful and simple approach is to distinguish between (i) “direct variable costs” related to transactions and (ii) “acquisition costs of clients”. Let’s call the former “Gross Margin” and the aggregate stemming from the latter “Direct Margin”.

In the following definition, the Gross margin indicator bears some resemblance to the one often used in SaaS analytics, albeit with different weighting. Contrary to e-commerce companies, costs of goods sold do not correspond to the costs related to merchandise (implicitly embedded in the GMV in this case). Rather, COGS may encompass (i) direct IT & hosting costs (servers, network bandwidth, power etc.), (ii) customer success and (iii) payment gateways (the latter being less common in subscription-based SaaS models).

The increase in Gross margin rate is thus mainly related to (i) the improvement of the “take rate”, (ii) the evolution and better absorption of direct IT costs, (iii) productivity gains in terms of customer success and (iv) lower payment charges per transaction.

In some cases, all the more so when the intermediation can be considered “managed”, additional categories of direct personnel costs, more related to the realisation of the core services delivered by the platform, will require the introduction of another level of contribution margin (denominations such as “Contribution Margin 1” (CM1) or “Contribution Margin 2” (CM2) – the latter when GM refers to CM1 – can be used).

In recurring subscription-based models, client acquisition costs are excluded from gross margin (or direct margin), as such costs don’t usually recur over time once the company has acquired the client (without neglecting the possible existence of renewal costs). In marketplace models, transactions rather than subscriptions tend to drive revenues. The costs of acquiring clients (whether buyers and/or sellers) are then more closely related to revenue, and therefore more “variable”. Since the relation between revenue and acquisition costs is not as direct as the one stemming from COGS (or direct personnel), the corresponding costs are presented in a different aggregate, direct margin (or Contribution margin 1, 2 or 3, depending on the previous denominations).

The direct costs deducted to arrive at direct margin consist of sales and marketing expenses, at least those considered to be “direct” (Google Ads, sales employees’ wages etc.). The classification of sales and marketing expenses as direct costs is not so obvious, and part of them can be considered indirect costs (all the more so when management/investors want to present more favourable figures). Consequently, and in summary, the level and evolution of marketplace direct margins (or contribution margins) are accounted for by (i) the take rate, (ii) gross margin – cost of sales – drivers (direct IT-related costs, customer success, payment) and (iii) CAC (client acquisition cost) efficiency.
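The aggregates discussed above can be sketched as a simple waterfall from GMV down to direct margin, with purely illustrative amounts and rates.

```python
# Hypothetical marketplace "waterfall": GMV -> commission revenue (take
# rate) -> gross margin (after direct variable costs) -> direct margin
# (after direct acquisition costs). All figures are illustrative.

gmv = 1_000_000.0
take_rate = 0.15
revenue = gmv * take_rate            # commission revenue: 150,000

cogs = {                             # direct variable costs on transactions
    "it_hosting": 15_000.0,
    "customer_success": 10_000.0,
    "payment_gateways": 20_000.0,    # here assumed at 2% of GMV
}
gross_margin = revenue - sum(cogs.values())        # 105,000

acquisition_costs = 45_000.0         # direct sales & marketing (e.g. ads)
direct_margin = gross_margin - acquisition_costs   # 60,000

print(f"take rate: {revenue / gmv:.0%}")                     # 15%
print(f"gross margin rate:  {gross_margin / revenue:.0%}")   # 70%
print(f"direct margin rate: {direct_margin / revenue:.0%}")  # 40%
```

Expressing each step as a rate of revenue (rather than GMV) keeps the waterfall comparable across platforms with very different take rates.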