This past December, I wrote a paper I’m rather excited to get some feedback on. About a week ago, after nearly a month in editorial triage over the holiday season, Classical and Quantum Gravity decided to send it out for review. The paper (a copy of which I’ve posted below) purports to prove decisively that a whole host of problems related to black holes—the existence of singularities, naked or otherwise, Hawking radiation and information loss, and really the basic astrophysical nature of black holes themselves—have all been badly misunderstood, and that the so-called ‘problems’ aren’t really physical problems after all. What it proves is that in a universe like ours, still populated with galaxies, stars, and nebulae, black holes formed through gravitational collapse must remain larger than their theoretical event horizons, asymptotically contracting towards them.
This is a big deal for a few reasons. To start, CQG is the world’s leading journal in classical relativity and quantum gravity. Its editors do not lightly decide to entertain obvious bullshit claims that standard paradigms everyone believes to be true are actually straightforwardly false. That just doesn’t happen.
Obviously, I don’t actually know what’s going on behind the scenes. But I am reasonably confident that a major challenge like this will not have been casually passed to review without being read by more than one editor, without consultation with someone from the Executive Editorial Board and the Editor-in-Chief, and potentially without input from at least one person on the journal’s Advisory Panel.
A review of those lists shows that these are the top people in the world in classical and quantum gravity—areas that will be significantly impacted by this paper, if it holds up to scrutiny. Importantly, every person who looked at the paper will have been asked to identify any errors in it. And the journal would not have sent it out for review if they had been able to identify a substantive problem with my argument.
That’s really the most remarkable thing about this: if anyone on the Editorial Board had been able to identify an error in the paper, it would have been rejected for that reason, and I would have received an email stating that the paper was not being sent for review due to some false claim or invalid move. The journal would not trouble volunteer referees with a paper they know is wrong.
This doesn’t mean they believe it’s right. And it certainly doesn’t mean they like it. But it does mean that several of the best people in the world to judge the paper’s argument have read the paper and acknowledged that they could not see an error.
For example, it seems unlikely that a paper purporting to prove that we’ve been wrong about black holes all this time would be sent for review without approval from the journal’s Editor-in-Chief, Susan Scott. And it seems unlikely that would have happened if she’d identified an error. If she read the paper and did not see an error in it, that would be a very big deal—because this is her area, and she knows her stuff.
Personally, I think that already means the paper should no longer be kept private among members of CQG’s Editorial Board and the referees they’ve sent it to. It should be shared publicly, where all physicists can see it and have an opportunity to critique it.
Because that’s how science works.
Because, “Oh shit, we’ve actually been wrong for decades about what black holes really are; this means Penrose’s singularity theorem, while technically valid, is actually unphysical” is not a conclusion that should be left to a handful of editors and a couple of referees to judge in private.
Even if they are the best of the best, would they not appreciate the paper existing publicly so that someone somewhere might identify some sleight-of-hand buried within my analysis? I know I’d sure appreciate such a sanity check if I were the one reviewing a potentially consequential paper like this. If that were me, and I knew that Susan Scott and some others on her Editorial and Advisory Boards weren’t able to identify an error in the paper, and I was asked to provide the last line of defense in deciding whether the paper lives or dies, I know I’d appreciate the existence of a publicly available preprint of this paper, so I might catch wind of anyone having identified an error in it.
… which brings me to the physics preprint server, arXiv
Ostensibly, in physics we are supposed to have such a preprint archive whose mission is to
“provide an open research sharing platform where scholars can share and discover new, relevant, and emerging science, and establish their contribution to advancing research.”
This preprint archive explicitly does not endorse any of the content published on the site, and while it does employ a moderation process, the moderators’ job is supposed to be only to ensure that papers meet minimum standards of scholarly rigour.
In practice, however, arXiv.org’s moderators routinely reject papers that meet its published minimum scholarly standards, providing no explanation and enforcing a far higher bar: moderators may require prior publication in a peer-reviewed journal before considering any appeal — and even then they reserve the right to reject.
Now, consider my paper. I initially submitted it to The Astrophysical Journal Letters, where it was handled by a senior astrophysicist with extensive experience in compact-object theory and gravitational dynamics. About a week later, he responded that while he personally found the paper’s subject matter interesting, it was outside the scope of ApJL, which focuses primarily on observational astronomy, and he recommended expanding it and submitting it to a physics journal. So a couple of days later, after making some revisions, I submitted it to Classical and Quantum Gravity, the top physics journal whose scope my paper falls squarely within.
That was on December 15. And while the holidays were approaching, there should still have been more than enough time to send a quick desk rejection before Christmas if the assigned handling editor had identified a problem.
But that didn’t happen. And then I didn’t hear back after Christmas either. Nor after the New Year. And then the winter term started and I made it through my first week of classes without an update, and I started to think, “Wow, they may really be treating me seriously.”
Then on January 9, the day before my birthday, I checked the article tracking system and saw that at last, my paper had been sent out to reviewers!!
I’d long since decided not to bother trying to submit the paper to arXiv.org because of their moderation practices, which I now believe have become structurally incompatible with their stated mission, vision, and values.
From past experience, I knew what the outcome would be. For instance, in November I submitted this paper to the arXiv. It presents a straightforward mathematical coincidence that could lead to silent, insidious bias in cosmological modelling if mishandled in computational pipelines. The paper shows that a certain quantity in perturbation theory must be normalised by a function of cosmological parameters that evaluates identically to unity when curvature is zero, energy densities are dominated by dark matter and dark energy, and the density parameters take precisely the values constrained by Planck’s CMB data, i.e. ΩM = 1 – ΩΛ = 0.315.
This result should be deeply disconcerting for computational cosmology. A normalisation factor, which models are supposed to be divided by, just happens to equal unity exactly at the parameter values our best models have constrained. If a pipeline were to unintentionally drop this function, the omission would bias it towards precisely that parameter set.
And what’s worse is that this function, whose value is unity at ΩM = 1 – ΩΛ = 0.315, is pretty generic in form—i.e. it could show up all over the place in analytic cosmology, not just where it appeared in my calculations, and could bias models in completely different ways if mishandled.
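To make the failure mode concrete, here’s a minimal toy sketch in Python. To be clear, the stand-in normalisation N(ΩM) and the placeholder observable below are invented purely for illustration and are not the function or the quantity from my paper; the only property that matters is that N equals unity exactly at ΩM = 0.315.

```python
# Toy illustration: any normalisation that equals 1 exactly at the concordance
# value produces an error, when omitted, that vanishes exactly there and grows
# away from it, silently pulling fits toward the concordance parameters.
import numpy as np

OM_CONCORDANCE = 0.315  # Planck-constrained matter density, flat LCDM

def normalisation(om):
    """Hypothetical stand-in; the only feature that matters is N(0.315) = 1."""
    return (om / OM_CONCORDANCE) ** 0.5

def model_correct(om, k):
    """Schematic prediction for some observable, properly normalised."""
    raw = om * np.exp(-k / 10.0)      # placeholder 'raw' prediction
    return raw / normalisation(om)    # divide by N, as the calculation requires

def model_dropped(om, k):
    """The same prediction with the normalisation silently omitted."""
    return om * np.exp(-k / 10.0)

k = np.linspace(0.1, 1.0, 50)
for om in (0.25, 0.315, 0.40):
    mismatch = np.max(np.abs(model_dropped(om, k) - model_correct(om, k)))
    print(f"Omega_M = {om:5.3f}   max |dropped - correct| = {mismatch:.3e}")

# The mismatch is exactly zero at Omega_M = 0.315 and nonzero elsewhere, so the
# broken pipeline looks correct precisely at the concordance value, which is
# what makes this kind of coincidence so easy to miss.
```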
So I wrote up a short note to explain the potential issue. It contained no claim that anything had actually been mishandled; it merely noted that this normalisation function happens to evaluate to unity precisely at the concordance-model parameters, that this is something computational cosmologists should look out for, and that pipelines should perhaps be checked to ensure they handle the normalisation correctly.
As I said, it’s a straightforward mathematical result, not a dramatic claim, but an important fact to be aware of, since it has the potential to bias our models towards constraining precisely the parameter values they have already constrained.
This paper should have been immediately published on the arXiv, as it presents nothing but a straightforward mathematical result that can be easily checked and could lead to significant errors in astrophysical models.
And yet, I received my typical rejection letter: our moderators have decided your paper does not meet our standards; they’ll only reconsider if it is first published in a peer-reviewed journal, and even then we still reserve the right not to publish.
That’s just been the way of it for me, for a while now.
But by Sunday, January 11, I got thinking that maybe, now that my black holes paper had passed actual professional editorial scrutiny at the top journal in the field, there could be no justification, according to arXiv.org’s published standards, for refusing to publish it.
The bar CQG’s editors apply when deciding whether a paper is serious enough, and free of discernible errors that would justify a desk rejection rather than troubling referees, must surely be higher than the bar at arXiv.org. After all, this is the physics preprint server that claims its mission is to foster open research sharing, where scholars can share and discover new, relevant, and emerging science. This is a preprint archive that claims to moderate only to ensure that minimum scholarly standards have been met. This is a preprint archive that states explicitly that it does not endorse its publications, so it would not be held accountable for the claims I’ve made in the paper. This is the preprint archive where any serious claim in physics should be published so that the field can judge its merit independently of whether one or two selected ‘referees’ have decided that they can’t work out how it might be wrong—where the entire field could help those referees and potentially identify an error they might miss.
I was reasonably confident I knew what the outcome would be. After they had refused to publish the last paper, there seemed little chance they would allow this one through — even with an explicit note stating that it was “currently under review at Classical and Quantum Gravity.”
And I was right. First thing Monday morning, I received an email informing me that my paper would not be published, and that it would only be reconsidered if it were first accepted by a peer-reviewed journal — and even then, arXiv reserved the right to refuse it.
I sat with that confirmation for a few days. But I could not shake the sense that something deeper was wrong. Two papers that plainly met arXiv’s own published minimum standards had been rejected without explanation. In both cases, the remedy offered was effectively the same: publish first elsewhere, and even then there would be no guarantee of visibility.
So I decided to stop guessing and put my concerns on the record. I wrote to arXiv’s Executive Director, Ramin Zabih, and to the Simons Foundation leadership as arXiv’s major funder, outlining what had happened and why I believed these decisions were incompatible with arXiv’s stated mission.
Ramin actually responded a couple of days later — and his reply clarified something far more troubling than I had previously understood.
He explained that arXiv moderation decisions are not made solely on the basis of the neutral, minimal criteria posted on their website. Instead, submissions are evaluated according to the scientific opinions of volunteer domain experts — experts who are badly overstretched, operating under severe time pressure, and collectively processing roughly a thousand submissions per day.
This distinction is crucial.
Moderation based on scientific opinion is not neutral infrastructure. It is substantive epistemic judgment. And it is fundamentally incompatible with the stated role of a preprint archive.
Under this model, rejection does not require that a paper be sloppy, incoherent, off-topic, or non-scholarly. It requires only that a moderator — acting privately, without justification — decides that the work does not look like physics as they understand physics to be. Whether the paper is technically correct, conservatively argued, or already deemed review-worthy by the leading journal in its field is irrelevant.
The contrast between arXiv’s publicly stated and privately employed moderation practices also creates a powerful illusion: that papers which do not appear on arXiv are illegitimate. In reality, they may simply have failed an unpublished test of aesthetic or paradigmatic conformity. Yet because many journals now expect arXiv posting — and some, including APS journals, actively prefer submission via arXiv identifiers — exclusion at this stage quietly but severely disadvantages authors, regardless of merit.
My decision to email the Simons Foundation and arXiv.org’s leadership was prompted in part by an interesting and surprisingly relevant interview from last spring with Nobel Laureate Gerard ’t Hooft, which I came across that same week, in which he said,
“the real reason there’s nothing new coming is that everybody’s thinking the same way!
“I’m a bit puzzled and disappointed about this problem. Many people continue to think the same way—and the way people now try to introduce new theories doesn’t seem to work as well. We have lots of new theories about quantum gravity, about statistical physics, about the universe and cosmology, but they’re not really “new” in their basic structure. People don’t seem to want to make the daring new steps that I think are really necessary. For instance, we see everybody sending their new ideas first to the preprint server arXiv.org and then to the journals to have them published. And in arXiv.org, you see thousands of papers coming in every year, and none of them really has this great, bright, new, fine kind of insight that changes things. There are insights, of course, but not the ones that are needed to make a basic new breakthrough in our field.
“I think we have to start thinking in a different way. And I have always had the attitude that I was thinking in a different way. Particularly in the 1970s, there was a very efficient way of making further progress: think differently than your friends, and then you find something new!
“I think that is still true. Now, however, I’m getting old and am no longer getting brilliant new ideas every week. But in principle, there are ways—in, one could argue, quantum mechanics, cosmology, biology—that are not the conventional ways of looking at things. And to my mind, people think in ways that are not novel enough.”
This complaint stood in stark contrast with my own experience, which suggested that the real reason is not a lack of originality or diversity in the ways people today are looking at and thinking about things, but that arXiv.org is silently suppressing exactly such avenues of thought.
And the structure revealed by Ramin’s admission, that moderation depends on the scientific opinions of domain experts, suggests that the homogeneity ’t Hooft was lamenting may very well not be natural at all. It may be manufactured.
When visibility itself depends on whether new ideas resemble existing ones closely enough to pass through unpublished opinion filters, genuinely different approaches never reach the community. They are filtered out before disagreement can even occur.
Throughout the history of science, domain experts have repeatedly proven to be the poorest judges of challenges to accepted views. Einstein’s 1905 paper on special relativity would never have survived moderation based on contemporary scientific opinion. Einstein himself wrote a damning note claiming that Friedmann’s 1922 cosmological model was wrong, so Friedmann had to go to great lengths to explain the validity of his calculation; and, famously, Einstein reacted to Lemaître’s 1927 paper by asserting that while the mathematics was correct, the physical insight was abominable. These papers now form the bedrock of modern cosmology. Cecilia Payne’s discovery that the Sun is composed primarily of hydrogen and helium was initially dismissed as obviously wrong. If the Principia had required Cassini’s approval, it would never have existed. If the geocentrists had prevailed, the Dialogo would have been silenced and Two New Sciences would never have been written.
The list goes on and on. This is how scientists are. Kuhn diagnosed the issue. It’s philosophy of science 101: Experts trained within a paradigm cannot neutrally evaluate challenges to that paradigm.
A preprint archive is not a journal. Its role is not to determine whether claims are correct, persuasive, or compatible with prevailing understanding. Its role is only to determine whether a submission represents a good-faith attempt by a trained participant to contribute to an ongoing disciplinary conversation.
The moment moderation decisions are made on the basis of scientific opinion rather than minimal criteria of relevance and competence, the archive ceases to function as neutral infrastructure and becomes a pre-public screen for paradigm conformity.
The problem is not expertise. It is the replacement of public argument with private opinion rendered in haste and without justification. Scientific disagreement is resolved through reasons that can be examined, contested, and corrected — not by licensing authority the ability to extinguish challenges before they can be seen. When decisions are made without transparency or justification, there is no mechanism by which bias can be identified, error corrected, or disagreement meaningfully addressed.
At that point, moderation becomes equivalent in epistemic effect to censorship, regardless of intent. And science fails not when ideas are wrong, but when systems prevent wrong ideas from being argued in public.
If arXiv is to serve its stated mission of advancing open scientific communication — and if it is to align with the Simons Foundation’s commitment to asking big questions as we work to unravel the mysteries of the universe; to make space for scientific discovery — it cannot simultaneously function as a hidden, prejudicial filtration layer that decides which ideas are permitted to exist. Discovery has never occurred through private adjudication. It has always occurred in public.
That more or less sums up the story that’s unfolded over the past couple of months.
Now: what’s actually in this paper that I’m all fussed about?
My black holes paper
Well, here’s a copy of the paper that you can read for yourself. This isn’t quite the version I submitted to Classical and Quantum Gravity in December: it contains some clarifications, e.g. about the obvious implications for trapped surfaces, some framing around the conservative methodology used in the paper, and one additional section (3.4) that closes the loop on the basic structure of the proof in sections 3.1–3.3. It is the version I submitted to the physics arXiv. I’ll summarise it below.
The paper presents a remarkably straightforward and technically conservative case. Following an introduction that outlines the present view of black holes and the situation with observed mergers and the numerical models that describe them, it first of all constrains the scope to black holes existing in the ‘present’ universe—by which I simply mean the universe during its epoch of galaxies, while stars and nebulae still exist, before everything dynamically freezes out in the far distant future.
The next section is the logical hinge of the entire argument. I prove a basic causality theorem showing that when a binary black hole merger or accretion event is observed (or, more to the point, if such events are to be even theoretically observable), the objects involved must still have been larger than their associated event horizon radii at the time of the merger. It can’t be otherwise: black holes themselves can’t causally influence anything in the outside universe, so any light-like signal, whether a photon or a gravitational wave, that we could ever see coming from a black hole must have been emitted while the gravitationally collapsing matter was still larger than its event horizon—i.e. while it was still just a superdense object, not a black hole with a fully formed event horizon.
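To make that point concrete with a standard textbook illustration (this is my framing for the blog, not a formula lifted from the paper): in the idealised Schwarzschild exterior, an outgoing radial light ray emitted at radius r_e reaches a distant observer at radius R only after a coordinate time

$$\Delta t \;=\; \frac{1}{c}\left[(R - r_e) + r_s \ln\!\frac{R - r_s}{r_e - r_s}\right], \qquad r_s = \frac{2GM}{c^{2}},$$

which diverges as r_e → r_s. Any photon or gravitational-wave signal that actually arrives in finite time must therefore have been emitted while the collapsing matter was still outside r_s.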
I also note that this is entirely consistent with the numerical models we use to describe the signals instruments like LIGO detect from black hole mergers. These models propagate gravitational waves arising within the causally accessible region outside the event horizon, which we detect some time later at our distant location, owing to the finite speed of light.
The next step in the proof is to note that when two such objects merge, each still larger than its event horizon, or when any particles accrete onto a single collapsing star, the matter worldtube changes. When this happens, the worldtube itself must be updated; it doesn’t just keep on collapsing as it was, causally generating the same external space-time it had done before.
You see, general relativity is a local theory of gravitation. What this means is that the space-time induced by stress-energy content arises causally from the matter worldtube itself, with the information about the matter generating the space-time curvature propagating away from it at the speed of light. And in general relativity, this is all invariant structure—it doesn’t depend on whatever coordinate system is used. So, basically, when the worldtube updates, the whole region of space-time to the future of the light cone generated at the collision event has to be replaced by the space-time generated by the new collapsing worldtube, not the old, pre-collision/accretion one.
You can understand this by thinking about a flash of light that goes out from the black hole in all directions at the moment of the event. It propagates away at the speed of light, and carries with it information that the space-time curvature outside has to update because the generating matter has changed.
This all means we really can’t just arbitrarily and carelessly extend the original worldtube of the collapsing matter or the space-time structure generated by it. The whole thing has to be cut off and updated along that outgoing light cone so it reflects the updated worldtube and space-time curvature it generates.
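Stated compactly (my shorthand here, not necessarily the notation used in the paper): if p is the collision or accretion event and J⁺(p) is its causal future, i.e. the region on and inside the outgoing light cone from p, then the exterior geometry must satisfy

$$G_{\mu\nu}[g] \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}^{\text{updated}} \quad \text{throughout } J^{+}(p),$$

with the updated stress-energy of the merged or accreting configuration as the source, while outside J⁺(p) the previously generated space-time remains valid, since no information about the event can have reached that region.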
The next section tidies things up by noting that “present black holes” are things that will undergo merger and accretion events far into the future, with every such event anchoring the object to being still larger than its event horizon when the event occurs (since visible signals from mergers and accretion could never be observed otherwise). And every time that happens, the whole external space-time has to be updated to the future of the outgoing light cone generated at the event.
But in a universe with ongoing accretion and merger events, which will continue far into the future as black holes persist within galaxies, interacting with dust and other matter while everything slowly radiates away energy and spirals into galactic cores, there is no realistic sense in which any present black hole can be said to have already participated in every interaction event it will ever participate in. For a very long time to come, there will still be more such events in every present astrophysical black hole’s future. And that means every present astrophysical black hole must still be larger than its event horizon radius—i.e. no event horizon has yet formed, and matter has not collapsed to a singularity inside any present astrophysical black hole. Instead, they all must still be asymptotically approaching their theoretical horizon radii.
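This asymptotic approach is already visible in the standard idealised picture. For a surface collapsing through the Schwarzschild exterior, the radius registered by a distant observer approaches the horizon radius only exponentially slowly in that observer’s time,

$$r(t) - r_s \;\sim\; \mathrm{const}\times e^{-ct/r_s} \quad \text{as } t \to \infty,$$

so the horizon is a limit that is approached ever more closely but, in exterior time, never actually reached. (That is the textbook late-time behaviour; I quote it here as an illustration, not as a result from the paper.)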
Section 3.4 closes the logical loop by showing why this entire result is unavoidable once causal structure is taken seriously. The argument up to this point does not depend fundamentally on the astrophysical details of mergers and accretion. Those simply make the conclusion obvious. The deeper reason is that space-time curvature itself is causally generated by the collapsing matter worldtube.
Every outgoing light ray that constructs the exterior spacetime accessible to observation originates on the collapsing surface before any event horizon could form. No causal signal that reaches the outside universe is ever generated by a completed horizon or interior region. As a result, the exterior spacetime we observe is always anchored to the pre-horizon phase of collapse. There is never a moment when the exterior geometry becomes causally grounded in a completed black hole.
Once this is recognized, the earlier conclusions follow necessarily. Observable mergers must occur while the objects are still larger than their horizon radii. Each interaction replaces the exterior spacetime to its causal future. In a universe with ongoing structure formation, collapse therefore remains asymptotic: present astrophysical black holes never form physically realized event horizons or singular interiors.
With no completed horizon, the conditions required for Hawking radiation are not satisfied. The spacetime remains globally connected, no region becomes permanently inaccessible, and the quantum field never evolves on a background containing a true causal boundary. As a result, horizon-induced Hawking radiation does not arise for astrophysical black holes in the present universe, and the information-loss paradox never occurs.
The conclusion is not that event horizons are mathematically meaningless, but that they belong to auxiliary global extensions that are never physically realised in a universe undergoing continued dynamical evolution. What exists instead are perpetually collapsing ultra-compact systems whose exterior spacetime is continually regenerated by their evolving matter content.
That’s the full proof. It’s pretty neat, and pretty intuitive. And it’s a pretty cool bit of physics that I think will initially bother a lot of physicists whose research has till now been based on the idea that completed black holes are real physical objects when they are not. But once the dust settles, I think people will be pretty fascinated by all the implications!
For one thing, what this proof seems to indicate is that classically, according to general relativity alone, Schwarzschild’s result (along with generalisations by Kerr, Newman and others) implies that matter in our universe cannot become arbitrarily small; density has to remain finite. It says there is a minimum radius that any mass can attain, one that depends directly on how much mass is actually there. That is an incredibly cool thing!
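In the simplest non-rotating, uncharged case that minimum radius is just the Schwarzschild radius,

$$r_s = \frac{2GM}{c^{2}} \;\approx\; 2.95\ \mathrm{km}\times\frac{M}{M_\odot},$$

so, on the reading argued for in the paper, a collapsing mass M can approach but never shrink below a radius set directly by M itself. (The Kerr and Kerr–Newman generalisations shift the relevant radius but not the basic point.)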
And the other thing that’s really cool is that we already know that when anything falls into a black hole it reaches the horizon in finite proper time. So here we have this situation where the collapse happens asymptotically, approaching this minimum radius ever more slowly, always progressing to smaller and smaller radii, with a finite limit that’s only attained in the infinite future. But on the other hand, for the star that’s collapsing and for everything that ever falls in, the horizon is reached after a finite amount of proper time has passed.
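For concreteness, here are the standard textbook numbers behind that tension (again illustrative, not drawn from the paper): a test particle falling radially from rest at radius r₀ crosses any given radius, the horizon included, within finite proper time, and in the idealised classical solution the total proper time to reach the centre is

$$\tau \;=\; \frac{\pi}{2}\sqrt{\frac{r_0^{3}}{2GM}},$$

while the Schwarzschild coordinate time registered by a distant observer diverges as the particle approaches r_s, growing like −(r_s/c) ln(r − r_s). Finite for the infaller, infinite for the rest of us: that’s exactly the tension I’m describing.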
It clearly raises the question: what happens after? Is there an ‘after the infinite future’? Or does something else happen at the end of our universe, when the black holes that settle at the centres of what once were galaxies or clusters of galaxies no longer accrete anything, when they’ve generated all the space-time events that will ever occur outside of them and finally reach their finite radii? Will the particles that fall into them and reach those event horizons at the end of time go on existing? And in what sense will they do that?
