Technical debt does not always come from an obvious bad practice or a last-minute shortcut. Sometimes it comes from a perfectly understandable decision made at the very beginning of a project, when nobody can truly imagine how far the system might grow. That is what Wise has just reminded the industry after its cofounder and CEO, Kristo Käärmann, publicly acknowledged that the company is close to exhausting the 32-bit space used for transfer identifiers, a choice he made in 2010 when he wrote the first lines of code.
The issue, explained simply, is this: a signed 32-bit integer allows up to 2,147,483,647 positive values. A signed 64-bit integer would have pushed that ceiling to 9,223,372,036,854,775,807. Käärmann admitted on X that choosing int instead of long was a shortsighted decision, and he did so with the kind of humor engineers often reserve for the moment when an old architectural choice comes back years later.
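The arithmetic is easy to verify. A quick sketch in Python (the daily ID rate below is purely illustrative, not a figure from Wise):

```python
# Maximum values of signed 32-bit and 64-bit integers
INT32_MAX = 2**31 - 1   # 2,147,483,647
INT64_MAX = 2**63 - 1   # 9,223,372,036,854,775,807

print(INT32_MAX)  # 2147483647
print(INT64_MAX)  # 9223372036854775807

# At, say, 1 million new transfer IDs per day (an assumed rate,
# for illustration only), the 32-bit space lasts roughly:
years_left = INT32_MAX / 1_000_000 / 365
print(round(years_left, 1))  # 5.9 years
```

The point the numbers make is that the 64-bit ceiling is not just "bigger": it is more than four billion times larger, which is why the same design question essentially never comes back once `BIGINT` is chosen.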
What makes the story interesting is not just the technical detail. What matters is that this is happening to Wise because it has reached enormous scale. In its fiscal year 2025, the company moved £145.2 billion in cross-border payments for 15.6 million people and businesses. In other words, this is not a badly designed database inside a startup that never matured. It is a platform that grew so much that it eventually ran into a modeling decision made when the project was barely getting started.
For sysadmins and programmers, the case is especially valuable because it touches an uncomfortable truth: often the problem is not today's database, but yesterday's assumptions. And when those assumptions affect data types, primary keys, sequences, indexes, partitioning, foreign keys, replication, or API contracts, they rarely break all at once. They turn into a slow-ticking time bomb.
The mistake was not using INT; it was assuming it would never be too small
From a strictly technical perspective, using INT for a sequential identifier was not insane in 2010. In fact, for years it was common across countless systems. The problem appears when that data type stops being an internal detail and becomes a system-wide constraint.
At a fintech like Wise, a transfer ID does not usually live in a single table. It can appear in foreign keys, queues, logs, ETL pipelines, support tools, anti-fraud systems, audits, internal dashboards, exports, API documentation, and third-party integrations. Changing it is not just an ALTER TABLE. It is a migration project.
That is the part most relevant to sysadmins, SREs, and backend developers: the real cost of these decisions is not the four bytes of difference between an INT and a BIGINT. It is the complexity of migrating a core identifier once the business depends on it across dozens of layers.
In a modern system, a migration like this forces teams to think about:
- backward compatibility in APIs and clients;
- dual-write or shadow columns during transition;
- online backfill without degrading production;
- impact on indexes and storage;
- changes in partitioning or sharding if they exist;
- validation across replicas, backups, and restores;
- observability to detect old reads or writes;
- and coordination between applications, teams, and deployment windows.
In other words, the drama is not that an INT runs out. The drama is everything that INT has already spread into over 16 years.
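The dual-write and shadow-column pattern from the checklist above can be sketched in a few lines. This is a minimal illustration using SQLite (whose integers are already 64-bit); the table and column names (`transfers`, `legacy_id`, `id_v2`) are hypothetical, not Wise's actual schema:

```python
import sqlite3

# Hypothetical schema for an INT -> BIGINT migration using a shadow
# column. In a real RDBMS, legacy_id would be the old 32-bit column
# and id_v2 the new 64-bit one being backfilled online.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE transfers ("
    " legacy_id INTEGER,"   # old 32-bit identifier
    " id_v2     INTEGER,"   # new 64-bit identifier
    " amount    INTEGER)"
)

def create_transfer(transfer_id, amount):
    # Phase 1: dual-write both columns, so consumers that still
    # expect the legacy identifier keep working during the transition.
    db.execute(
        "INSERT INTO transfers (legacy_id, id_v2, amount) VALUES (?, ?, ?)",
        (transfer_id, transfer_id, amount),
    )

def get_transfer(transfer_id):
    # Phase 2: reads prefer id_v2, falling back to legacy_id for
    # rows the online backfill has not reached yet.
    return db.execute(
        "SELECT COALESCE(id_v2, legacy_id), amount FROM transfers "
        "WHERE id_v2 = ? OR legacy_id = ?",
        (transfer_id, transfer_id),
    ).fetchone()

create_transfer(2_147_483_000, 500)
print(get_transfer(2_147_483_000))  # (2147483000, 500)
```

Even in this toy form, the shape of the work is visible: the application carries both identifiers for a while, and only once backfill, reads, and every downstream consumer have switched over can the legacy column finally be dropped.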
Famous technical debt almost always looked reasonable at the start
The Wise case fits neatly alongside other classic examples of technical debt or software decisions that looked sensible when they were made and ended up costing a great deal later.
The most famous example remains Y2K. For decades, many systems stored the year using two digits in order to save space in memory and storage. In the short term, that made sense. In the long term, it became a global remediation effort to avoid failures when the calendar rolled from 99 to 00. NIST documented at the time how even embedded systems were exposed to this kind of historical assumption.
Another iconic case is Mars Climate Orbiter. NASA’s official report concluded that the spacecraft was lost because of a mismatch between imperial and metric units in the navigation software, a failure that destroyed a mission valued at $125 million. More than an isolated bug, it was a collision of incompatible assumptions between teams and software.
In finance, Knight Capital offers another well-known lesson. The SEC explained that the August 2012 incident was caused, among other things, by two critical technology failures in the deployment of software for its automated routing system. The outcome was a loss of more than $460 million in less than an hour. Here the debt was not a data type, but legacy code and deployment processes that were not under the level of control a critical platform required.
And if one wants the harshest example of all, Therac-25 remains mandatory reading. The case became a symbol of what happens when software and process replace physical safeguards without the necessary level of review and safety. The University of Texas summarizes it as one of the most serious software-failure cases in critical systems, with six known overdose accidents, three of them fatal.
All these cases share one thing: they did not begin as “obvious mistakes” to the people who made them. They began as reasonable shortcuts, assumed integrations, optimization decisions, or contextual simplifications. Only over time did they become structural problems.
What a sysadmin or programmer should read between the lines
The Wise story leaves a very useful lesson for technical teams: there are decisions worth over-dimensioning a little from day one, even if the system is still small.
This does not mean designing a planet-scale platform for a three-person startup. It means recognizing which parts of the system are especially expensive to change later. Primary identifiers, time formats, public contracts, state semantics, correlation keys, log schemas, and partitioning conventions often fall into that category.
Put differently: some components tolerate improvisation and others do not. Choosing the wrong web library may be annoying for a few months. Choosing the wrong identity model for your records may follow you for a decade and a half.
From an infrastructure perspective, cases like this also show why observability and a real dependency inventory matter so much. Many organizations assume they will “just migrate later,” but they only discover the scale of the problem when they have to find out where an identifier is actually used, which tables depend on it, which services still expect an INT, which external systems parse that value, and which dashboards break if its format changes.
Technical debt rarely explodes first in the visible layer. It usually appears in the least glamorous places: jobs, exports, support scripts, legacy integrations, data pipelines, caches, and dashboards nobody fully documented.
A proposed comment from David Carrero
David Carrero, cofounder of Stackscale, puts it this way:
“Cases like Wise are a reminder of something we see constantly in infrastructure: almost no serious problem is born on the day it fails, but on the day someone assumed a limit would never be reached. In cloud and distributed systems, the real cost is usually not the original technical decision itself, but everything built around it until it becomes a structural dependency.”
That observation fits especially well in a publication aimed at sysadmins and programmers because it points directly to where the pain really is: not in the data type itself, but in the accumulation of layers that end up depending on it.
The good news: this is the kind of technical debt most companies would like to have
It is important not to lose perspective. Wise getting close to exhausting a 32-bit transfer ID space does not sound like an existential business tragedy. If anything, it sounds like a form of technical debt born from success.
A great many technology companies never reach the point where a decision made in their first year becomes a real scale problem 16 years later. Wise did. And that turns the incident into a story worth learning from, almost amusing in its origin, but very serious in the lesson it carries.
Because the message for any technical team is clear: the most dangerous lines of code are not always the most complex ones. Sometimes they are the smallest. The ones nobody questions. The ones written in an afternoon under the assumption that there will always be time to fix them later.
Frequently asked questions
Why is using a 32-bit integer for IDs a problem?
Because it has a maximum limit of 2,147,483,647 positive values. In systems that generate huge numbers of records over many years, that space can eventually run out.
Would using BIGINT from the beginning have solved it?
Yes, technically it would have provided far more headroom. The problem is that many early architectural decisions are made with the context and needs of the moment, not with the scale a company may reach a decade later.
Why is migrating from INT to BIGINT so complicated?
Because it usually affects much more than one table. It can impact indexes, foreign keys, APIs, internal tools, external integrations, data lakes, logs, queues, and legacy systems.
Is this bad news for Wise?
Technically it is a serious engineering challenge, but strategically it reflects that the company has reached very large scale. In a way, this is the kind of technical debt many companies would prefer to have.
