
The GPT Store Is Live — What It Means for AI Development

Osmond van Hemert
AI Industry & Regulation - This article is part of a series.

Last week, OpenAI finally launched the GPT Store, and with it, the company is making its most significant platform play yet. After teasing custom GPTs at DevDay back in November, the store is now live for ChatGPT Plus, Team, and Enterprise subscribers. It’s essentially an app store for AI agents — and if that comparison makes you think of both the promise and the pitfalls of Apple’s App Store circa 2008, you’re not alone.

I spent the past week poking around the store, building a couple of custom GPTs myself, and talking to other developers doing the same. Here’s my assessment of where things stand and where they might be heading.

What the GPT Store Actually Is

At its core, the GPT Store is a marketplace where anyone with a ChatGPT Plus subscription can publish custom GPTs — specialized versions of ChatGPT configured with specific instructions, knowledge files, and capabilities. These aren’t fine-tuned models; they’re more like sophisticated prompt wrappers with persistent configuration and optional API integrations through what OpenAI calls “Actions.”

The store is organized into categories: DALL·E, Writing, Research, Programming, Education, and Lifestyle. There’s a search function, trending lists, and featured picks curated by OpenAI. The initial catalog already has over three million custom GPTs, which tells you both that the barrier to creation is low and that discovery is going to be a massive challenge.

Building a custom GPT takes minutes if you’re doing something simple — you’re essentially having a conversation with GPT-4 about what you want the bot to do, uploading any reference documents, and optionally connecting external APIs. It’s impressively accessible for non-developers, which is both the point and the concern.

The Developer Angle: Actions and API Integration

Where things get interesting for us as developers is the Actions framework. Actions let your custom GPT call external APIs using OpenAPI specifications. You define the endpoints, the authentication method, and the schema, and GPT-4 figures out when and how to call them based on the conversation context.
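To make that concrete, here is a minimal sketch of the kind of OpenAPI document an Action consumes. Everything in it is illustrative: the server URL, the `/search` endpoint, and the `searchDocs` operation are hypothetical names I've chosen, not anything from OpenAI's documentation. The `operationId` and parameter descriptions matter in practice, because they're what the model reads to decide when to call the endpoint.

```python
# A minimal OpenAPI 3.1 spec for a hypothetical "searchDocs" Action.
# All endpoints, URLs, and names below are illustrative placeholders.
import json

action_spec = {
    "openapi": "3.1.0",
    "info": {"title": "Docs Search API", "version": "1.0.0"},
    "servers": [{"url": "https://docs.example.com/api"}],
    "paths": {
        "/search": {
            "get": {
                # The model uses operationId and descriptions to decide
                # when and how to call this endpoint mid-conversation.
                "operationId": "searchDocs",
                "summary": "Full-text search over documentation pages",
                "parameters": [
                    {
                        "name": "q",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                        "description": "Search query",
                    }
                ],
                "responses": {
                    "200": {
                        "description": "Matching pages",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "title": {"type": "string"},
                                            "url": {"type": "string"},
                                        },
                                    },
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

print(json.dumps(action_spec, indent=2))
```

You paste (or import) a document like this into the GPT builder, pick an auth scheme, and the model handles call construction from there.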

I built a GPT that integrates with our internal documentation API — essentially a conversational interface over our engineering wiki. The setup was straightforward: export the OpenAPI spec, configure OAuth, and let the GPT handle the rest. It works surprisingly well for lookup-style queries, though it struggles with multi-step workflows that require maintaining state across several API calls.

The real potential here is in vertical applications. A GPT that can query your monitoring stack, cross-reference with your incident database, and suggest runbook steps — that’s genuinely useful. But we’re a long way from reliable implementations of that vision. The context window limitations, the occasional hallucination about API parameters, and the lack of proper error handling make current Actions feel more like prototypes than products.
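One workaround for the multi-step state problem is to stop asking the model to orchestrate: collapse the workflow into a single composite endpoint and do the chaining server-side, so the GPT makes one Action call instead of three. The sketch below illustrates the idea with stubbed functions; `check_alerts`, `related_incidents`, and `triage` are hypothetical names, not a real monitoring API.

```python
# Sketch: instead of the GPT chaining three API calls (and losing state
# between them), expose one composite endpoint that orchestrates
# server-side. All functions and data here are hypothetical stand-ins.

def check_alerts(service: str) -> list[dict]:
    # Stand-in for a monitoring-stack query.
    return [{"service": service, "alert": "high_latency", "since": "09:12"}]

def related_incidents(alert: dict) -> list[dict]:
    # Stand-in for an incident-database lookup.
    return [{"id": "INC-2041", "alert": alert["alert"], "runbook": "latency.md"}]

def triage(service: str) -> dict:
    """Composite endpoint body: one Action call, all state handled here."""
    alerts = check_alerts(service)
    incidents = [i for a in alerts for i in related_incidents(a)]
    return {
        "service": service,
        "alerts": alerts,
        "suggested_runbooks": sorted({i["runbook"] for i in incidents}),
    }

result = triage("checkout")
print(result["suggested_runbooks"])  # ['latency.md']
```

The trade-off is that you're hard-coding the workflow the model was supposed to figure out, which says something about how far the "agent" framing currently stretches.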

The Economics: Revenue Sharing Is Coming

OpenAI has announced a revenue-sharing program for GPT creators, set to launch in Q1 2024. Details are sparse — they’ve mentioned it’ll be based on “user engagement” — but this is clearly the carrot designed to attract serious developers to the platform.

I’m skeptical about the economics for individual creators. If the App Store taught us anything, it’s that these marketplaces tend toward a winner-take-all dynamic. The top 1% of GPTs will capture the vast majority of engagement, and with three million entries already, standing out is a needle-in-a-haystack problem.

The more interesting economic play is for companies that use custom GPTs as a distribution channel for their existing services. If you’re a SaaS company, wrapping your API in a conversational GPT interface is essentially a new customer acquisition channel — one that lives inside ChatGPT’s massive user base.

The Platform Risk Discussion

Let’s talk about the elephant in the room: platform dependency. Building on OpenAI’s platform means accepting that they control the rules, the distribution, and the underlying model. They can change the Terms of Service, adjust the ranking algorithm, or deprecate features at will.

We’ve seen this movie before. Facebook’s app platform, Twitter’s API ecosystem, Slack’s app directory — all went through cycles of openness followed by constraint. I’m not saying OpenAI will follow the same path, but any developer building a business on the GPT Store should have a clear-eyed view of the risks.

The smart play, as always, is to treat the GPT Store as a channel, not a foundation. Build your core logic in your own stack, expose it via APIs, and use the GPT Store as one of several interfaces. If the platform changes, you’ve lost a distribution channel, not your entire product.
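In code, "channel, not foundation" just means the GPT Action handler is a thin adapter over logic you own. A sketch, with hypothetical names throughout:

```python
# "Channel, not foundation": core logic lives in your own stack; the
# GPT Action is one thin adapter among several. All names illustrative.

def summarize_order(order_id: str) -> dict:
    """Core business logic, owned by you and platform-independent."""
    # Stand-in for a real database lookup.
    return {"order_id": order_id, "status": "shipped", "eta_days": 2}

def gpt_action_handler(params: dict) -> dict:
    """Adapter for the GPT Store Action endpoint."""
    return summarize_order(params["order_id"])

def cli_handler(order_id: str) -> str:
    """The same core logic exposed through a different channel."""
    o = summarize_order(order_id)
    return f"Order {o['order_id']}: {o['status']}, ETA {o['eta_days']}d"
```

If OpenAI changes the rules tomorrow, you delete `gpt_action_handler` and nothing else moves.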

What’s Missing

A few things I noticed that are conspicuously absent:

Analytics: There’s no dashboard for GPT creators to understand usage patterns, user retention, or conversation quality. You’re publishing into a void.

Version control: There’s no proper versioning system for GPTs. You can edit your GPT, but there’s no way to roll back, maintain multiple versions, or do staged rollouts.

Team collaboration: Building GPTs is currently a solo activity. There’s no way for a team to co-manage a GPT, which limits its usefulness for corporate deployments.

Testing frameworks: There’s no way to systematically test your GPT’s responses before publishing. Given that these are customer-facing products, the lack of QA tooling is a significant gap.
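Until OpenAI ships something, the closest you can get is rolling your own regression harness. Custom GPTs have no direct API, so in practice you'd approximate one by replaying the GPT's system prompt through the Chat Completions API and asserting on the responses. The sketch below stubs out the model call so the harness structure is the point; the prompts, canned answers, and function names are all invented for illustration.

```python
# Sketch of the missing QA tooling: a tiny regression harness that
# replays canned prompts against the GPT's configuration and checks
# each response. The model call is stubbed; in a real harness you'd
# swap in a Chat Completions request using the same system prompt.

SYSTEM_PROMPT = "You are a docs assistant. Answer only from the wiki."

def call_model(system: str, user: str) -> str:
    # Stub standing in for a real API call.
    canned = {"How do I rotate the API key?": "See wiki page auth/rotation."}
    return canned.get(user, "I don't know.")

TEST_CASES = [
    # (prompt, substring the answer must contain)
    ("How do I rotate the API key?", "auth/rotation"),
    ("What's the weather?", "don't know"),  # must refuse off-wiki queries
]

def run_suite() -> list[tuple[str, bool]]:
    results = []
    for prompt, must_contain in TEST_CASES:
        answer = call_model(SYSTEM_PROMPT, prompt)
        results.append((prompt, must_contain in answer))
    return results

for prompt, passed in run_suite():
    print(f"{'PASS' if passed else 'FAIL'}: {prompt}")
```

It's crude (substring matching won't catch subtle regressions), but even this is more than the platform gives you today.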

My Take

The GPT Store is an important moment in AI development, but not because of what it is today. It’s important because it signals OpenAI’s strategic direction: they want to be the platform, not just the model provider. The store is their bid to create an ecosystem lock-in that goes beyond API access.

For developers, the immediate value is modest. Custom GPTs are useful for internal tools and quick prototypes, but the lack of proper development tooling makes them hard to take seriously for production applications.

The real question is whether OpenAI will invest in the developer experience. If they add proper analytics, testing frameworks, and team features, the GPT Store could become a meaningful platform. If they don’t, it’ll be another app store full of novelty bots that nobody uses after the first week.

I’ll be watching the revenue-sharing details closely. That’ll tell us more about OpenAI’s commitment to the ecosystem than any number of blog posts.
