Reality itself doesn't know whether AI is a bubble. Or, to be more precise: whether a "burst-like event"[1] will happen is - in all likelihood, as far as I'm concerned - not entirely determined at this point in time. If we were to "re-run reality" a million times starting today, we'd probably find something that looks like a bursting bubble in some fraction of those runs and nothing that looks like one in another fraction - and the rest would be cases where people disagree, even in hindsight, on whether a bubble burst at all.[2]
When people discuss whether AI is a bubble, they often frame this (whether deliberately or not) as a question about the current state of reality. As if you could just go out into the world, take some measurements, and - upon finding "yep, it's a bubble" - know for sure that this bubble must pop eventually.[3] And while there certainly are ways to measure how bubbly different parts of the economy look, what looks like a bubble today may either slowly "deflate" rather than burst, or reality may eventually catch up with it, justifying the previously high valuations.
Uncertainty is sometimes conceptually split into two parts: epistemic (our limited knowledge) and aleatory (fundamental uncertainty in reality itself). My claim here is basically just that, when it comes to bubbles bursting in the future, the aleatory component is not 0, and we shouldn't treat it as such. In other words, there is an upper limit on how certain a rational person can become, at any point in time, about whether an AI bubble burst event will occur. Sadly, where that limit lies is itself uncertain, which makes all of this not very actionable. Still, it seems important[4] to acknowledge that no amount of research done today can be expected to yield certainty on such questions, as reality itself probably isn't fully certain about them.
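As a toy illustration (every number here is hypothetical, chosen purely for the sketch), the "re-run reality" framing can be written as a tiny Monte Carlo simulation: the aleatory part is a burst propensity `p` baked into reality itself, and even an observer with zero epistemic uncertainty - one who knew `p` exactly - could never be more than `max(p, 1 - p)` confident about the outcome.

```python
import random

random.seed(0)

def rerun_reality(p, n_runs=100_000):
    """Fraction of simulated 're-runs' in which a burst-like event occurs.

    p is the (hypothetical) aleatory burst propensity; each run is an
    independent draw, standing in for one alternate history.
    """
    return sum(random.random() < p for _ in range(n_runs)) / n_runs

true_p = 0.6  # assumed for illustration; nobody knows this number
observed = rerun_reality(true_p)

# Even with p known exactly (no epistemic uncertainty left), the best
# achievable confidence about the outcome is capped by the aleatory part:
certainty_ceiling = max(true_p, 1 - true_p)

print(f"burst frequency across re-runs: {observed:.3f}")
print(f"upper bound on rational certainty: {certainty_ceiling:.2f}")
```

In this framing, research can shrink our uncertainty about `p`, but never below the ceiling that `p` itself imposes - which is the post's point in miniature.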
Ultimately, whether any burst-l