X has once again done something unusual for a major social network: publishing a significant part of the code behind its “For You” feed. The latest update to xAI’s GitHub repository, dated May 15, 2026, has reignited the debate over how the platform decides which posts reach thousands of users and which ones disappear with barely any traction.

The release does not fully open the black box, but it does reveal an architecture that relies far more heavily on artificial intelligence than on traditional rules. The system, known as Phoenix, combines content from accounts users follow with posts discovered outside their direct network, then ranks them through a model based on Grok’s architecture. X’s public promise is a more transparent feed. The technical reading, however, is more uncomfortable: visibility on the platform increasingly depends on signals that are difficult to observe from the outside.

The repository’s official documentation describes a two-stage system. First, it retrieves candidates from among millions of posts. Then it ranks them with a transformer that predicts probabilities of interaction: likes, replies, reposts, clicks, dwell time, follows, blocks, mutes, reports and signals of disinterest. That combination is later turned into a final score that determines what appears in the feed.
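
To make that scoring step concrete, here is a minimal Python sketch of how predicted interaction probabilities could be combined into one ranking number. The signal names come from the repository’s documentation; the probabilities and weights are invented for illustration and are not X’s production values.

```python
# Hypothetical sketch of the second stage: combining predicted interaction
# probabilities into a single ranking score. Signal names follow the
# repository documentation; all numbers below are illustrative.

# Example predictions a ranking model might emit for one candidate post.
predictions = {
    "like": 0.12, "reply": 0.03, "repost": 0.02, "click": 0.25,
    "dwell": 0.40, "follow": 0.005,
    "block": 0.001, "mute": 0.002, "report": 0.0005, "not_interested": 0.01,
}

# Positive signals add to the score; negative signals subtract from it.
# These weights are assumptions, not values published by X.
weights = {
    "like": 1.0, "reply": 4.0, "repost": 2.0, "click": 0.5,
    "dwell": 1.5, "follow": 8.0,
    "block": -20.0, "mute": -10.0, "report": -40.0, "not_interested": -5.0,
}

def final_score(preds: dict[str, float]) -> float:
    """Weighted sum of predicted engagement probabilities."""
    return sum(weights[signal] * p for signal, p in preds.items())

print(f"score = {final_score(predictions):.3f}")
```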

A user is no longer just an account: it is a vector

One of the points that has sparked the most debate is the role of embeddings. In simple terms, an embedding is a mathematical representation of a user, a post or an author. It is not a visible label or a manual marker, but a set of numbers that summarises behavioural patterns: which content a person interacts with, what they ignore, which authors they block, which topics they consume and what kinds of posts usually hold their attention.

Phoenix’s README explains that the system uses a two-tower architecture: one tower encodes the user and the other encodes candidate posts. The result is a similarity search that helps find content matching the user’s interaction history. In practice, this confirms something many creators already suspected: the platform does not evaluate each post from scratch, but within a statistical memory of previous behaviour.
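
The pattern is easier to see in code. Below is a minimal two-tower sketch in which random projections stand in for the trained encoders and retrieval is a dot-product similarity search. Every dimension, matrix and feature here is an assumption; only the overall shape follows the README’s description.

```python
import numpy as np

# A minimal two-tower sketch: one tower embeds the user, the other embeds
# candidate posts, and retrieval is a similarity search in the shared space.
# All shapes and weights are invented; only the pattern mirrors the README.

rng = np.random.default_rng(0)
DIM = 64                                    # shared embedding dimension (assumption)
W_user = rng.standard_normal((DIM, 128))    # stand-in for a trained user encoder
W_post = rng.standard_normal((DIM, 96))     # stand-in for a trained post encoder

def embed(W: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Project raw features into the shared space and L2-normalise."""
    v = W @ features
    return v / np.linalg.norm(v)

user_vec = embed(W_user, rng.standard_normal(128))      # one user's history vector
posts = rng.standard_normal((1_000, 96))                # candidate pool features
post_vecs = np.stack([embed(W_post, p) for p in posts])

# Retrieval = nearest neighbours by dot product (cosine, since vectors are unit).
scores = post_vecs @ user_vec
top_10 = np.argsort(scores)[::-1][:10]
print("closest candidates:", top_10)
```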

A viral analysis published on X goes further and argues that each account builds up an internal “fingerprint” that can improve or deteriorate over time. According to that interpretation, signals such as “not interested”, blocks, mutes, reports or fast scrolling may contaminate an account’s algorithmic profile for weeks. This has not been officially confirmed by X in those exact terms, but it is consistent with the general logic of modern recommendation systems: history matters, and it does not always disappear just because a creator changes strategy.

The consequence is clear for anyone publishing professional, informative or brand content. It is not enough for a single post to be good in isolation. The system also looks at the author’s history and the accumulated reaction from the audience. A bad run of low-interest posts can make recovery harder, because the algorithm learns that the account generates less value for certain users.

The first few minutes matter more than they seem

Another relevant element is content age. The code includes a constant called POST_AGE_MAX_MINUTES = 4800, equivalent to 80 hours. The system groups post age into one-hour windows and assigns an overflow bucket once that limit is exceeded. This does not mean every post expires exactly at the 80-hour mark, but it does show that freshness is an explicit variable inside the model.
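
The bucketing logic implied by that constant is simple enough to sketch. POST_AGE_MAX_MINUTES is the published value; the function around it is one reading of the documented behaviour, not code copied from the repository.

```python
# Sketch of the age bucketing implied by the published constant: post age is
# grouped into one-hour windows, with a single overflow bucket past 80 hours.

POST_AGE_MAX_MINUTES = 4800  # 80 hours, as published in the code

def age_bucket(age_minutes: int) -> int:
    """Map a post's age to a one-hour bucket, with one overflow bucket."""
    if age_minutes >= POST_AGE_MAX_MINUTES:
        return POST_AGE_MAX_MINUTES // 60  # bucket 80 = "older than 80 hours"
    return age_minutes // 60               # buckets 0..79, one per hour

print(age_bucket(30))      # 0  -> posted within the last hour
print(age_bucket(4500))    # 75 -> 75 hours old
print(age_bucket(10_000))  # 80 -> overflow bucket
```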

For creators, the practical reading is obvious: X does not behave like YouTube, where content can resurface months later through search or evergreen recommendations. On X, time matters a lot. The first few hours are decisive, and the first few minutes can determine whether a post receives enough interaction to enter wider distribution circuits.

The analysis shared by Javi López insists that the first 30 minutes are critical and that, without enough early response, many posts may not even enter deeper evaluation. That specific claim should be read as an interpretation of the code and observed behaviour, not as an official rule published by X. Even so, it matches the experience of many users: when a post starts badly, it is usually difficult to revive later.

This is where an important difference appears between visible and invisible metrics. Likes matter, but they are not the only signal. The repository documentation includes dwell time, meaning the time a user spends looking at or engaging with a post, among the actions Phoenix can predict. A post that makes people stop, read, open an image or watch a video may send a richer signal than a quick like.
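
A toy comparison makes the point. Reusing the illustrative weights from the earlier scoring sketch, a post that holds readers can outscore one that only collects quick likes; the numbers are invented, the asymmetry is the message.

```python
# Toy comparison under invented weights: a post that earns quick likes versus
# one that holds attention (high predicted dwell). Numbers are illustrative.

w_like, w_dwell = 1.0, 1.5   # hypothetical weights

quick_like   = w_like * 0.20 + w_dwell * 0.05   # many likes, little reading
holds_reader = w_like * 0.08 + w_dwell * 0.45   # fewer likes, long dwell

print(f"{quick_like=:.3f}  {holds_reader=:.3f}")  # dwell-heavy post scores higher
```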

That helps explain why some formats perform better than they appear to. Well-structured long texts, videos that retain attention, images with useful information or short threads with real value can outperform posts that generate many superficial likes but little time spent.

What is visible, and what X has not published

The most sensitive part of the repository lies in what is not there. X has published an architecture, components, filters, documentation and a mini version of Phoenix. But not all production weights, prompts, internal rules or configurations are exposed. The README itself states that the released model is a smaller, frozen version, while the production version of Phoenix uses a larger model continuously trained with real-time data.

That greatly limits any definitive conclusion. The code helps explain the system’s philosophy, but it does not allow anyone to reconstruct precisely how each real feed is ranked at any given time. The specific weights of each signal, safety rules, experiment settings, moderation systems and internal configurations may substantially change the final output.

Care is also needed with the word “shadowban”. The code and documentation do show visibility filters: deduplication, age limits, blocked or muted accounts, deleted content, spam, violence and gore. Negative signals such as blocks, mutes, reports and “not interested” also appear. But claiming that there is a universal, automatic and measurable shadowban for any account requires more evidence than a partial reading of the repository.

What is clear is that X uses a layered distribution system. First, it selects candidates. Then it enriches them with additional information. After that, it filters, scores and re-ranks. Finally, it applies post-validation before serving the feed. At each stage, a post can lose opportunities, not because of one single visible penalty, but because of the accumulation of small signals.
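
That staging can be sketched end to end. In the hypothetical pipeline below, every rule and threshold is a stand-in; only the order of operations mirrors the description above.

```python
from dataclasses import dataclass

# Hypothetical layered flow: filter candidates, score, re-rank for diversity.
# All rules and numbers are stand-ins; only the staging follows the article.

@dataclass
class Post:
    post_id: int
    author_id: int
    age_minutes: int
    is_duplicate: bool
    score: float = 0.0

def passes_filters(p: Post, muted_authors: set[int]) -> bool:
    # Stand-ins for the published filters: dedup, age limit, muted accounts.
    return (not p.is_duplicate
            and p.age_minutes < 4800
            and p.author_id not in muted_authors)

def rerank_for_diversity(posts: list[Post], per_author: int = 2) -> list[Post]:
    # Illustrative author-diversity pass: cap the posts kept per author.
    seen: dict[int, int] = {}
    kept = []
    for p in posts:
        seen[p.author_id] = seen.get(p.author_id, 0) + 1
        if seen[p.author_id] <= per_author:
            kept.append(p)
    return kept

def rank_feed(candidates: list[Post], muted: set[int]) -> list[Post]:
    filtered = [p for p in candidates if passes_filters(p, muted)]
    scored = sorted(filtered, key=lambda p: p.score, reverse=True)
    return rerank_for_diversity(scored)

feed = rank_feed(
    [Post(1, 10, 30, False, 2.1), Post(2, 10, 60, False, 1.9),
     Post(3, 10, 90, False, 1.8), Post(4, 11, 200, True, 3.0),
     Post(5, 12, 9000, False, 2.5)],
    muted={13},
)
print([p.post_id for p in feed])  # duplicate, stale and capped posts drop out
```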

For creators, media outlets and brands, this forces a change in habits. Posting five times in a row may mean competing against yourself if the system tries to diversify authors. Spending all day replying to large accounts may generate visibility in specific conversations, but it does not necessarily help amplification outside your network. Repeating content can collide with duplicate filters. And producing text or images that look like low-quality automated content can become a negative signal if classifiers detect it.

X’s algorithm does not reward activity alone. It rewards the probability of useful reaction. And that reaction is no longer measured only through clicks or likes, but through a broad set of positive and negative signals.

X’s partial transparency has value because it allows researchers, creators and advertisers to better understand the logic of the system. But it also confirms that opacity has not disappeared. The skeleton is on GitHub; the fine-tuning remains inside X. That difference matters a lot. In social networks, a small variation in a weight or safety rule can change the fate of thousands of accounts.

The most sensible conclusion is not to look for a magic recipe to “beat the algorithm”, but to understand its incentives. X wants recent content that holds attention, generates positive interaction and does not trigger rejection signals. It wants author variety, fewer duplicates and less low-quality content. And, above all, it wants Grok and Phoenix to learn from every user gesture.

For anyone publishing on X, that leaves a simple recommendation: think less about tricks and more about signals. Post when your audience is awake, take care of the initial launch, avoid spam, do not overuse endless threads, write original posts with real value and measure whether people stay or scroll past. The algorithm is not neutral, but it is not completely invisible either. This time, at least, it has left some clues.

Frequently asked questions

Has X published its entire algorithm?
No. It has published the repository for the “For You” feed recommendation system, with important components and a mini version of Phoenix, but not all production weights, internal rules, prompts or configurations.

What is Phoenix inside X?
Phoenix is the transformer-based recommendation system that retrieves and ranks posts according to probabilities of interaction, using user history, embeddings and content signals.

Are likes the most important metric on X?
Not necessarily. The system also takes into account signals such as replies, reposts, clicks, follows, dwell time, blocks, mutes, reports and “not interested”.

Does the user’s location affect reach?
The published code does not allow us to say that the author’s location directly penalises a post. What can matter is posting outside the active hours of the target audience or using a language that does not match the audience a creator wants to reach.
