Weekly newsletter

Templates and how-tos to help you fulfil your potential

The AI Platform shift

This article is part of a series on OpenAI and generative AI products:

  1. How to Use ChatGPT: Understanding ChatGPT and OpenAI → here
  2. The AI Platform shift → this one. 
  3. Using ChatGPT and generative AI in product development → [coming soon] 
  4. Prompting ChatGPT → [coming soon] 
  5. Case studies: how tech companies are using ChatGPT → [coming soon] 


The Platform shift

“I see it as an extraordinary platform shift. Pretty much, it’ll touch everything: every sector, every industry, every aspect of our lives. So one way to think about it is no different from how we have thought about maybe the personal computing shift, the internet shift, the mobile shift. So along that dimension, I think it’s a big shift.”

Sundar Pichai, Google and Alphabet CEO, Decoder Exclusive: Google’s Sundar Pichai talks Search, AI, and dancing with Microsoft

“[Bill] Gates insinuates that the PC revolution, the Internet revolution, and the AI revolution are discrete events, but they can also be viewed as three applications of the defining economic feature of digitization — zero marginal costs — to information:

  1. The PC allowed for zero marginal cost duplication of information; this is what undergirded breakthroughs like word processors and spreadsheets and the other productivity applications Gates specialized in.
  2. The Internet allows for zero marginal cost distribution of information. This led to markets based on abundance, not scarcity, giving rise to Aggregators like Google.
  3. AI is zero marginal cost generation of information (well, nearly zero, relative to humans)…generative models unbundle idea creation from idea substantiation, which can then be duplicated and distributed at zero marginal cost.”
Google I/O and the Coming AI battles, Ben Thompson, Stratechery

The internet is 40 years old.  We’ve been through the PC age, the internet age and the mobile age, and now the AI epoch is beginning.  The hype cycle around AI is currently so noisy that no one’s social feed is safe from the topic, the White House has got involved, and the feeding frenzy has reached the point where investors are already writing about why they’re not going to invest in LLMs.

The Mobile maturity curve
Maturity curve of mobile as a platform, updated from “Mobile is Eating the World”, Ben Evans

The AI maturity curve
Likely that AI follows the same cycle as mobile, the last platform shift: we are currently in the creation phase
With thanks to “Mobile is Eating the World”, Ben Evans

The AI epoch has arrived while we’re still living through the societal aftershocks of the internet’s gift of zero marginal cost distribution of information: markets based on abundance, rather than scarcity, of information.  The effects of this are well known: from Silicon Valley Bank’s fastest bank run in history, to widespread disinformation, to the fundamental reshaping of entire industries, namely the shift towards social over television or streaming in younger generations (TikTok > Netflix), supported by millions of content creators making and distributing content (largely) for free.

But what happens when the competition isn’t TikToks, but AI generated content?  What happens in a world characterised by zero to extremely low cost generation of information, built on top of low cost distribution of information?

Only the future will tell for sure, but based on the last 6 months we expect the development and evolution of AI as a platform to follow very similar patterns to PC or mobile platform development.  The major battleground is currently between the tech giants to own the new platform real estate.  Think Android / iOS, or Google and Bing.  Building disruptive multi-million dollar businesses on top of those platforms, with both good and bad consequences, is to follow.  

We are currently in the creation phase of AI, and over the next decade we will move to a mature deployment of AI across millions of devices and users.  Despite the noise and hype around the latest plug-in or AI enabled business that we in the technology industry see daily on our social feeds, global penetration of AI chatbots and AI-assisted productivity tools is still quite low – far more people have a laptop or a phone which they use daily than use ChatGPT.

Machine learning is already embedded in a host of products and services, from the existing Google search engine to Amazon’s product results, but bot and productivity penetration of the ChatGPT / Google Search Generative Experience / Bing / Microsoft 365 / Duet type is yet to expand beyond the tech community.  Google’s new Search Generative Experience and the new Bing mark the first full rollout of an AI product to the globe.

Over the next decade, we believe that AI will become ubiquitous: easy to use and embedded in day to day internet experiences and commonly used software.

Despite the current noise, we believe that AI will take time to bed in: smartphones might be in the hands of the majority of the world’s population, but global penetration took over a decade to achieve.  In addition, despite the hype at the beginning of the mobile cycle, apps have by no means cornered the internet, and there are some consumers who will never move to mobile.  It’s likely that, just as we do today with web and mobile, we will have choices about what to use: if AI isn’t the best solution, we don’t have to use it.

“This is really a much more general machine learning question – what are domains that are deep enough that machines can find or create things that people could never see, but narrow enough that we can tell a machine what we want?”

“ChatGPT and the Imagenet moment”, Ben Evans

We can expect a similar pattern as AI matures, at least in its current form, whereby evolution is more likely than big bang disruption.  Despite all the talk in recent months about Google being under threat from Microsoft, Google’s search share was still 8x that of Microsoft’s Bing five months after ChatGPT was released.  Initial primary applications for AI are essentially time saving applications embedded into existing Big Tech products.  Dystopian applications for AI, such as facial recognition and predictive policing, look likely to be regulated against in the EU.

Big Tech are directly competing in this space, and will increase their market cap by expanding total tech real estate via the new platform and via AI driven applications.  AI will take a share of tasks from humans, thereby expanding the digital landscape.  The existing Big Tech 4 (Apple, Amazon, Microsoft, Google) all stand a chance of winning big with AI, but, as with mobile, they will adapt to the landscape and claim their own real estate.  It’s likely that they will control the majority of the landscape in coming years, despite open source hype.

There will be a major adaptation to Big Tech business models over the next 10-20 years as this beds in, similar to the evolutions we’ve seen Microsoft go through over the last 25 years, or Apple in the last 15.

This will have implications for how we all operate in the tech industry, similar to the advent of mobile.

The web ecosystem

Microsoft, Google and Amazon

AI is currently an arms race between Google and Microsoft, both of whom have significant capability in the space.  

“One day in mid-November, workers at OpenAI got an unexpected assignment: Release a chatbot, fast.

The announcement confused some OpenAI employees. All year, the San Francisco artificial intelligence company had been working toward the release of GPT-4…The plan was to release the model in early 2023, along with a few chatbots that would allow users to try it for themselves, according to three people with knowledge of the inner workings of OpenAI.

But OpenAI’s top executives had changed their minds. Some were worried that rival companies might upstage them by releasing their own A.I. chatbots before GPT-4, according to the people with knowledge of OpenAI.”

“How ChatGPT Kicked Off an A.I. Arms Race”, New York Times

Not only is ChatGPT shaping up to be a dominant platform in its own right (an app store style ecosystem of plug-ins is entirely plausible), but it additionally allowed Microsoft (who are rumoured to own 49% of OpenAI) to kick off an offensive against Google’s search dominance, by using a ChatGPT powered Bing chatbot.  

“Google is the 800-pound gorilla in search. I want people to know that we made them dance.”

“Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why”, The Verge

Search is a hugely profitable business for Google, and a market in which they hold 85% share globally.  Gaining as little as one percentage point from Google would make a meaningful difference to Microsoft’s gross margin, and thus their profits.  Since 2019 Microsoft has gained ~3% share from Google, but AI represents an opportunity to make major inroads.

[Interviewer speaking about Bing’s revenues of $11bn per year] …You want to grow that into a real business. You want to take market share. But obviously, the new technology does not have the same cost structure as the old search query. I’m sure that whatever you’re doing with OpenAI, it’s more compute-intensive, and then obviously you have a partner sitting in the middle of it. And then the monetization model is still search ads. It’s direct response search ads. 

[Satya Nadella] It’s so wonderful. Think about what you just said. You said, “Okay, here is the largest software category where we have the smallest share,” and what you just painted out is an unbelievable picture of incremental gross margin….Very few times in history do opportunities like that show up where you suddenly can start a new race with a base where every day is incremental gross margin for you and someone else has to play to protect it all: every user and all the gross margin.

“Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why”, The Verge

Google are moving to solidify their own ‘OpenAI’ capabilities as a result.  The T in ChatGPT stands for ‘Transformer’, a neural network architecture first invented at Google.  Google has reacted to the success of ChatGPT by bringing Alphabet’s DeepMind in-house, and has additionally invested c. $400m into Anthropic, whose Claude chatbot competes with ChatGPT and has partnered with Zoom, Google Cloud and Slack.

The success of OpenAI is also allowing Microsoft to take share from AWS in the cloud computing space, as companies rush to use OpenAI’s API and therefore Microsoft Azure cloud computing services.  AI and cloud computing are inextricably linked:

The generative AI boom is compute-bound. It has the unique property that adding more compute directly results in a better product. Usually, R&D investment is more directly tied to how valuable a product was, and that relationship is markedly sublinear. But this is not currently so with artificial intelligence and, as a result, a predominant factor driving the industry today is simply the cost of training and inference.

“Navigating the High Cost of AI Compute”, a16z.com

Creating ChatGPT-like answers takes a lot of compute power.  Access to scalable compute power at a low cost, via a partnership with a major cloud computing player (Amazon, Google and Microsoft control 65% of the $237bn market), is a substantial advantage in this space.  As AI expands the digital ecosystem, the cloud market expands; but equally, ‘viral’ AI systems and embedding AI in global products and services come with challenges: the cost of delivery, and the ability to add more compute to support more queries.  The bigger LLMs are currently either developed by, or invested in by, a major cloud provider.
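The arithmetic behind this compute constraint is easy to sketch. Here’s a back-of-envelope estimate of what serving generative answers at search scale might cost; every figure below is an illustrative assumption, not a measured number:

```python
# Back-of-envelope cost of serving generative answers at search scale.
# All figures are illustrative assumptions, not measured values.

QUERIES_PER_DAY = 8.5e9        # assumed global search volume at Google scale
TOKENS_PER_ANSWER = 500        # assumed prompt + completion tokens per query
COST_PER_1K_TOKENS = 0.002     # assumed blended inference cost, USD

daily_cost = QUERIES_PER_DAY * (TOKENS_PER_ANSWER / 1000) * COST_PER_1K_TOKENS
annual_cost_bn = daily_cost * 365 / 1e9

print(f"Daily inference cost:  ${daily_cost:,.0f}")       # $8,500,000
print(f"Annual inference cost: ${annual_cost_bn:,.2f}bn")  # $3.10bn
```

Even under generous assumptions the bill runs into billions of dollars a year, which is why access to cheap, scalable cloud compute is the gating factor described above.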

Why the big players are in a dominant position

Building and running AI models like GPT-3 or -4 was initially thought to be inherently monopolistic, due to the costs required to build and train a model.  OpenAI has reportedly taken in over $11bn in funding, the majority from Microsoft, and Anthropic is currently looking to raise $5bn with a 4 year plan to take on OpenAI.  In addition, the models were only in a few hands, meaning limited freedoms, access controls and a certain homogeneity of service delivery.

Then the leak of Meta’s LLaMA model on 4chan opened up the possibility of running open source AI on a ‘beefy’ home computer, and with it the possibility of cheap and unrestricted AI for the masses.  There has been a lot of hype, driven by a leaked Google memo written by an in-house engineer:

“We Have No Moat. And neither does OpenAI

We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

I’m talking, of course, about open source. Plainly put, they are lapping us.”

Leaked Google memo, 4th May 2023

However it’s likely that open source remains a fringe pursuit, and that Microsoft, Google and maybe Amazon stay out ahead in this race.  The reasons:


There are only a certain number of engineers who invest time working on open source projects, and the popularity of individual projects is subject to swings over time.  It’s unlikely that open source AI development keeps pace with enterprise AI investment, and it’s unlikely that bug maintenance or user upgrades are on a par.


Likely regulation of the space will make it hard for open source models to be viable, partly for the reasons articulated above.

Focus on regulation of AI is growing sharply, with the EU just publishing an initial draft of its AI Act, proposing a ban on mass biometric surveillance and predictive policing, and Italy initially banning ChatGPT.

At a Senate hearing in May, Sam Altman, CEO of OpenAI, advocated for far-reaching and standalone regulation of the space, including liability for companies producing AI models and governmental scrutiny of models, suggestions which were warmly received at the hearing.  It’s been suggested that he was pushing for regulation as an anti-competitive move.

“Regulation invariably favours incumbents and can stifle innovation,” Emad Mostaque, founder and CEO of Stability AI, told The Verge.  Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar reaction [to the suggestion that LLMs are licensed]: “Requiring a license to train models would be like requiring a license to write code. IMO, it would further concentrate power in the hands of a few & drastically slow down progress, fairness & transparency.”

“The Senate’s hearing on AI regulation was dangerously friendly”, The Verge


They used to say “Nobody gets fired for going with IBM”; the modern echo might be “Nobody gets fired for going with OpenAI”

‘The Leaked Google Memo and OpenAI’s moats’, The Cognitive Revolution

Security breaches, data leaks and inappropriate content are already major AI risks, even when using Bing or ChatGPT.  Those risks proliferate with less robust, less constrained, less maintained models (cheaper, or open source).  It’s likely many companies will decide that who they get into bed with matters, especially if it’s consumer facing.

None of this prevents the commoditization of AI, but it will restrict the spread of ‘free’ or fragmented AI.

Other reasons Microsoft and Google are out ahead


OpenAI has arguably led the market on price cuts, with a ~97% reduction over the last nine months. There’s no reason to think they’re done. At $2 per million tokens, how much do you really stand to save with open source? Open source can only undercut so much.

By being so aggressive on price so quickly, OpenAI is making it extremely hard for would-be LLM utility competitors to ever make a profit.

‘The Leaked Google Memo and OpenAI’s moats’, The Cognitive Revolution

OpenAI is currently pricing so competitively that there’s limited value to capture for other players / entrants to the space.  If you add up: 1) the costs of training a high quality model (as opposed to a cheaper one), 2) the possible future costs of training data for models, and 3) the time to pay back on a low pricing model, you can see competition becoming less and less attractive. 
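The squeeze on would-be competitors can be sketched as a simple break-even calculation. The $2 per million tokens figure comes from the quote above; the self-hosting costs below are purely illustrative assumptions:

```python
# Break-even between paying an API price and self-hosting a model.
# API price is from the quote above; self-hosting figures are assumptions.

API_PRICE_PER_M_TOKENS = 2.00        # USD per million tokens (quoted above)
SELF_HOST_FIXED_MONTHLY = 20_000.0   # assumed GPUs, ops and engineering, USD
SELF_HOST_PER_M_TOKENS = 0.50        # assumed marginal cost per million tokens

def monthly_api_cost(m_tokens: float) -> float:
    return m_tokens * API_PRICE_PER_M_TOKENS

def monthly_self_host_cost(m_tokens: float) -> float:
    return SELF_HOST_FIXED_MONTHLY + m_tokens * SELF_HOST_PER_M_TOKENS

# Self-hosting only wins above this monthly token volume
break_even_m_tokens = SELF_HOST_FIXED_MONTHLY / (
    API_PRICE_PER_M_TOKENS - SELF_HOST_PER_M_TOKENS
)
print(f"Break-even at ~{break_even_m_tokens:,.0f}M tokens per month")  # ~13,333
```

Under these assumptions you would need to push over 13bn tokens a month through your own model before self-hosting beats the API price, and every further price cut pushes that threshold higher.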

Additionally it’s hard to see how these investment tiers and payback horizons would be viable for a player not aligned with Big Tech: entering this area is undoubtedly assisted by having cloud real estate, payback patience, an existing ecosystem, an adjunct forest of high margin products, and the cash on hand to make it all worthwhile.  Think Android / iOS.

First mover advantage

OpenAI were first to market with a successful chatbot (though not the first overall), and have gained a significant first mover advantage due to the speed of adoption of the product.  OpenAI is currently the iPhone of the AI race: desirable, beautiful, easy to use:

What I think changed the point of inflection is how ready users were. It’s almost like that moment where you realize… Because these technologies have pitfalls, they have gaps, but you realize you’re at a moment in time where people are ready to use it. They understand it, and they’re adapting to it.

Sundar Pichai, Google and Alphabet CEO, Decoder Exclusive: Google’s Sundar Pichai talks Search, AI, and dancing with Microsoft

Both OpenAI and Microsoft have capitalised on this advantage, shipping and trailing products at pace.  To name just a few: OpenAI has released GPT-4, released plug-ins, announced enterprise instances for AI applications, and released ChatGPT Plus and an iOS app that supports voice input; Microsoft has announced the new Bing, Bing for Edge, Azure AI and Copilot for Microsoft 365.  This has forced the rest of the market to scramble to catch up:

[Describing Google’s I/O, where they introduced their new Search experience, Duet and other AI supported products] The vibe of the presentation felt like a forced smile, or trying way too hard to be excited by what would have been exciting four months ago. Yes, those are amazing abilities you’re highlighting, except I already have most of them.

“AI#11: In Search of a Moat”, Zvi Mowshowitz, Don’t Worry About the Vase

The battleground on search, productivity apps and cloud

For most of us, Google and Microsoft are the web: the doorway the majority of us walk through to enter the digital world.  We fire up Chrome or Edge as our browser, we search using Google or Bing, and we schedule our lives, write our documents and reply to our emails in Workspace or 365.  The primary way AI is entering our lives, aside from the direct portal of ChatGPT, is via:

  1. Search – a market 93% owned globally by Google and Microsoft (Bing), supported by choice of browser – a market 84% owned globally by Google (Chrome) and Apple (Safari)
  2. Enterprise productivity software, such as Workspace and 365 – a market 98% owned by Google and Microsoft
  3. Cloud – the ecosystem powering AI applications

Combined, these form the current AI battleground, with functionality like AI embedded in ad distribution and creation becoming pay to play functionality.

The current AI battleground


Reddit user quote describing their switch to ChatGPT over Google

Google has a business model problem.  It’s not in Google’s interest to make search work too well, as then it loses searches and thus advertiser revenue.  But fundamentally users want the right answer to their query, at speed: without browsing through content farms, scrolling down yards of page to discover the menu at the bottom wasn’t what they wanted anyway, rephrasing their search query, or being forced to click on an irrelevant ad.  However it’s Google’s 85% share of the global search market which allows it to dominate advertiser revenues, and should users migrate to another search engine, Google’s ad revenue will go down.  Catch-22.

“…to think about what this means for Google and the idea of ‘generative search’ – what kind of questions are you asking? How many Google queries are searches for something specific, and how many are actually requests for an answer that could be generated dynamically, and with what kinds of precision? If you ask a librarian a question, do you ask them where the atlas is or ask them to tell you the longest river in South America?”

“ChatGPT and the Imagenet moment”, Ben Evans

On the other hand, Microsoft’s advertiser revenues are so (comparatively) low that should they capture any of Google’s share, it’s worth the risk.  It has two opportunities to do so: a) through ChatGPT’s future plug-in ecosystem, b) the new Bing, which comes with an embedded chatbot based on ChatGPT.

OpenAI have been rolling out ChatGPT plug-ins in beta, giving users the option to connect ChatGPT to other tools via Zapier, book holidays via Expedia or do their shopping via Instacart.  This raises the possibility of a future ChatGPT plug-in ‘app store’, allowing users to do all their browsing and buying within the bot.

In addition, Microsoft have released the new Bing, underpinned by ChatGPT, offering users a second ChatGPT-supported search experience.

Google is currently trying to navigate this stand-off via a redesign of search, unveiling a new ‘Search Generative Experience’ at this year’s Google I/O.

“The single most visited page on the internet is undergoing its most radical change in 25 years. 

On Wednesday, Google introduced a major overhaul of its search results page that infuses the screen with AI. Called the Search Generative Experience (SGE), the new interface makes it so that when you type a query into the search box, the so-called “10 blue links” that we’re all familiar with appear for only a brief moment before being pushed off the page by a colorful new shade with AI-generated information. The shade pushes the rest of Google’s links far down the page you’re looking at — and when I say far, I mean almost entirely off the screen.”

“Google wants you to forget the 10 blue links”, The Verge

Google’s new Search Generative Experience

However Google’s problem is also a lot of tech companies’ problem, since the market has grown to accommodate its host.  All web-based tech firms live within a Google ecosystem, and abide by its reward system, a crucial conduit for traffic.  Since 85% of the world’s search traffic goes through Google, competing for Google click rewards has long been an established traffic strategy.

The immediate response among much of the tech community and among many marketeers to the SGE demonstration at Google I/O  was nervousness. 


Those in the fearful camp worried that the change would result in:

  1. Significant loss of organic search traffic: users get the answer directly via SGE (Search Generative Experience) in the SERP (Search Engine Results Page) and cease exploring / don’t click through to site. There’s additional loss of visibility from being pushed down the page – each loss of position on SERPs is associated with diminishing clicks and impressions
  2. …resulting in a need to buy more ads for visibility
  3. …resulting in higher customer acquisition costs and reduced growth for available budgets
  4. Site content being harvested to provide a better result on the SERP without citation, disincentivising future content production and leading to a situation in which AI is ‘eating itself’.

Others were less concerned.  The counter view was that this was an evolution, and might even be a positive progression:

  1. Google has been pushing organic search results down the page for some time to make more room for ads (for example, with its featured snippet feature) → this is a further evolution, but far from a seismic event
  2. This might only strip out poor quality, low intent, high bounce traffic from websites since they now never make it to site
    1. The chatbot will answer simple queries, but in depth search queries will still land onsite
    2. For example: if we’re happy with a persuasive and averaged out 300 word answer to the question ‘What do Product Teams Do?’ then ChatGPT, Bing and Google should be able to do it.  But if we’re looking for a more detailed explanation, then there’s still a home for articles like these.  
  3. Options may emerge for revenue sharing arrangements (similar to Spotify) between search giants and content producers
  4. A reward system might be implemented for higher quality, human generated content, to avoid cannibalisation and drive clicks away from content farms
  5. Should Bing take more share from Google, this might drive down the cost of ads via competition (this is mainly the dream Satya Nadella is selling)

Nonetheless the battle over search represents a very real threat to website traffic for companies, and to their costs of customer acquisition.  Rising acquisition costs, in a tech ecosystem under funding pressure and with slim margins, could have knock-on effects within tech companies.  This is far from a marketing-only problem.

Cloud: Microsoft taking share via OpenAI & Azure AI

We had to evolve Azure to have specialized AI infrastructure on which OpenAI is built. And by the way, Inception and Character.ai are also using Azure. There will be many others who will use Azure infrastructure. So we are very excited about that part. And then, of course, we get to incorporate these large models inside of our products and make those large models available as Azure AI. And in all of this, we have both an investment return and a commercial return.

“Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why”, The Verge

Microsoft’s 2019 $1bn investment in OpenAI came with the caveat that Azure had to be OpenAI’s exclusive cloud service provider. Access to a cloud computing platform which is sizable enough to handle the compute requirements of big models at a preferential cost structure is a significant advantage in the space. Since distribution and monetization of AI involves giving ever increasing numbers of users access to models, it’s additionally useful to have computational power which can scale up with user demand.  This was both a smart move in light of their OpenAI investment, and mutually beneficial.

Cloud services growth has been trending down over the last year, closely correlated to the tech slow down and cost cutting within the industry.  Amazon is particularly affected, reporting 11% YoY growth in Q1 2023, compared to Google Cloud’s 28% increase and Microsoft Azure’s 31% YoY increase. 

“Azure took share, as customers continue to choose our ubiquitous computing fabric – from cloud to edge, especially as every application becomes AI-powered.

We have the most powerful AI infrastructure, and it’s being used by our partner OpenAI, as well as NVIDIA, and leading AI startups like Adept and Inflection to train large models.

Our Azure OpenAI Service brings together advanced models, including ChatGPT and GPT-4, with the enterprise capabilities of Azure.

From Coursera and Grammarly, to Mercedes-Benz and Shell, we now have more than 2,500 Azure OpenAI Service customers, up 10X quarter-over-quarter.

….More broadly, we continue to see the world’s largest enterprises migrate key workloads to our cloud.”

Satya Nadella, CEO of Microsoft, Microsoft Fiscal Year 2023 Third Quarter Earnings Conference Call

However, don’t assume Amazon is dead: not only are they partnered with Hugging Face, an open source AI community, but they have released Bedrock:

“a fully managed service that makes FMs [foundational models] from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With Bedrock’s serverless experience, you can get started quickly, privately customize FMs with your own data, and easily integrate and deploy them into your applications using the AWS tools and capabilities you are familiar with.”  

Amazon Bedrock, aws.amazon.com

Models available through Bedrock include those from Anthropic and Stability AI.

Additionally, the cloud computing market is expanding over the long term, and Amazon is a chip manufacturer in their own right.  Google also manufacture chips, and there are rumours that Microsoft might be gearing up to do the same.

Chip improvements are behind the incredible increase in computing power and memory function that has allowed technology to advance to where it is today. From 1956 to 2015, computing power increased one trillion-fold, thanks to chips. Think about this: the computer that navigated the Apollo missions to the moon was about twice as powerful as a Nintendo console. It had 32,768 bits of Random Access Memory (RAM) and 589,824 bits of Read Only Memory (ROM). A modern smartphone has around 100,000 times as much processing power, with about a million times more RAM and seven million times more ROM.

“The basics of microchips”, asml.com

Expansion of the digital landscape and expansion of the computing market is only possible via an increase in volume and efficiency of chips.  Ultimately as Ben Thompson notes, the biggest winners of all might be the chip manufacturers.  
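The “million times more RAM” claim in the quote above checks out with simple arithmetic. The Apollo figure comes from the quote; the 4 GB smartphone is an assumption:

```python
# Sanity check on the RAM comparison quoted above.
apollo_ram_bits = 32_768               # Apollo guidance computer RAM (quoted)
smartphone_ram_bits = 4 * 1024**3 * 8  # assumed 4 GB smartphone, in bits

ratio = smartphone_ram_bits / apollo_ram_bits
print(f"~{ratio:,.0f}x the Apollo computer's RAM")  # ~1,048,576x
```

A 4 GB phone has almost exactly 2^20, i.e. about a million, times the Apollo computer’s RAM, consistent with the quote.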

Productivity apps and enterprise instances

“AI-powered LLMs are trained on a large but limited corpus of data. The key to unlocking productivity in business lies in connecting LLMs to your business data — in a secure, compliant, privacy-preserving way….This means it generates answers anchored in your business content — your documents, emails, calendar, chats, meetings, contacts and other business data — and combines them with your working context — the meeting you’re in now, the email exchanges you’ve had on a topic, the chat conversations you had last week — to deliver accurate, relevant, contextual responses.”

Microsoft Blog, ‘Introducing your CoPilot for Work’

In our previous article we wrote about enterprise privacy concerns and ChatGPT.  However, the issue isn’t that enterprises don’t want to use the tool; it’s the lack of secure, private instances in which to process their data, and the lack of enterprise controlled tooling in which employees conduct their work.  Enterprise applications for LLMs are significant: data analysis, coding assistants, writing assistants, information synthesis, automation of manual language tasks and more.

As a result, OpenAI announced on 25th April that they will be building a business version allowing enterprises more control over their data.  Microsoft, OpenAI’s largest investor, with the sole licence for its products, have announced Copilot: their vision for embedded AI assistants within Microsoft 365 and Edge.  In Word, PowerPoint, Excel and others you’ll be able to prompt ChatGPT to draft or deliver tasks for you, such as a v1 PowerPoint presentation or a data visualisation.

Not only that, but Microsoft have some compelling data to support their vision: GitHub is owned by Microsoft, and GitHub Copilot has been available since June 2022.  When releasing Copilot for 365, Microsoft referenced their findings on the impact of Copilot on GitHub users, with 88% of respondents saying they perceived themselves as more productive when using it.  Google’s comparable product to Copilot for 365 is called Duet, and is similarly rolling out in the coming months.

In terms of the Microsoft / Google battle, the significant thing here is that Microsoft are already in market, and are known for having a highly effective distribution engine: one of the ways in which they have gained traction in search over the last 5 years (prior to OpenAI products) was via enterprise contracts for Microsoft operating systems and services.  

Statista, Microsoft Teams powers past Slack


It’s noisy out there, but we’re in the frenzy phase of a major platform shift in the internet epoch. Expect Google, Microsoft, possibly Amazon, and possibly Apple to duke it out in a major clash of the titans.  This will have effects on the day to day work of nearly everyone in technology, since most of us have a dependency on search, use productivity apps and have our sites run on cloud.  

The next decade is AI first from a Big Tech, platform perspective; and we can expect to see evolution in business models, services offered and revenue streams.  But it will take some time to be all pervasive, and as with mobile, AI won’t be the answer to everything. Nonetheless, somewhere, amid all the companies claiming to beat OpenAI at their own game built on GPT-4, there’s the seed of the next Amazon or Tiktok. We’ll know in 5-8 years, and then in 10-15.

Where AI is on the mobile adoption curve
“Mobile is Eating the World”, Ben Evans



How do OpenAI make money?

OpenAI makes money via paid-for products such as ChatGPT Plus, a premium subscription to ChatGPT, and by delivering paid-for services via APIs or web interfaces on a credit / token basis.  For example, requesting one image from DALL-E, an OpenAI image generation product, costs 1 credit.  OpenAI prices its products and services at a low price point, due to their belief in a commoditized AI future, and in order to achieve customer scale.

Why is Microsoft AI exciting?

Embedded AI in Microsoft productivity software, such as 365, is exciting because it offers the chance to remove administrative and time consuming repetitive tasks from day to day work life.  Examples include a first draft of a research paper, a first draft of a PowerPoint presentation, or faster data analysis without time spent laboriously in Excel on minor cleaning and coding tasks.

This article is part of a series on OpenAI and generative AI products:

  1. How to Use ChatGPT: Understanding ChatGPT and OpenAI → here
  2. The AI Platform shift → this one. 
  3. Using ChatGPT and generative AI in product development → [coming soon] 
  4. Prompting ChatGPT → [coming soon] 
  5. Case studies: how tech companies are using ChatGPT → [coming soon] 

Note: this article was authored in early May 2023.  This is a fast moving space and while we will endeavour to keep this up to date, out of date statements are inevitable. Please keep us honest at contact@hustlebadger.com
