Why Telecom Engineers Make Great CTOs
Distributed systems, real-time constraints, regulatory compliance, and protocols that predate the web. The telecom background that shaped how I think about building products at scale.
People sometimes ask me how I ended up in tech leadership coming from telecom. The assumption is that telecom is a legacy industry, full of old protocols and rigid infrastructure. That is partly true. But it is also the best training ground I have found for building products at scale.
I spent years working with voice infrastructure at Legos, where we handled 1.2 billion minutes per year across 110 countries. Later, I co-founded Cantoo, a cloud telecom platform. Now I lead engineering at Consentio, a supply chain platform. The telecom background shaped how I think about every one of these systems.
Real-Time Is Not Optional
In web development, you can often get away with eventual consistency. If a notification arrives a few seconds late, nobody dies. If a page takes an extra 200 milliseconds to load, users might not even notice.
In telecom, latency kills. Literally, in some cases, because voice infrastructure serves emergency services. When someone dials a number, they expect to hear a ringtone within a second. If the call setup takes three seconds, something is broken. If it takes five seconds, you lose the customer.
This trained me to think about performance as a product requirement, not an optimization task. At Consentio, where we handle real-time trading for perishable produce, that mindset transferred directly. When a buyer sends an order for 20 tons of tomatoes, they need confirmation fast. The produce is literally spoiling while they wait.
Most web developers think about latency as a nice-to-have metric. Telecom engineers think about it as a contractual obligation. That shift in perspective changes how you design everything from database queries to API contracts.
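One way to make that shift concrete is to treat the latency budget as an enforced invariant rather than a dashboard metric. Here is a minimal sketch of the idea; the budget value and function names are illustrative, not from any real system:

```python
import time

# Illustrative latency budget, treated as a contract: breaching it is an
# error to alert on, not a number to graph later.
LATENCY_BUDGET_MS = 800

def call_with_budget(operation, budget_ms=LATENCY_BUDGET_MS):
    """Run an operation and fail loudly if it exceeds its latency budget.

    Note: this detects a breach after the fact rather than cancelling the
    slow operation; real systems would pair it with request timeouts.
    """
    start = time.monotonic()
    result = operation()
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > budget_ms:
        raise TimeoutError(
            f"operation took {elapsed_ms:.0f}ms, budget is {budget_ms}ms"
        )
    return result
```

The point is not the mechanism but the framing: once slowness raises an error, every engineer on the team treats it as a bug, the way a telecom engineer would.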
Protocols Teach You to Think in Contracts
SIP, SS7, SMPP, Diameter. Telecom is built on protocols that were designed by committees, debated for years, and implemented by dozens of vendors who all interpret the specifications slightly differently.
Working with these protocols teaches you something that no amount of REST API design will: the importance of strict contracts between systems. When your SIP gateway talks to a carrier, both sides need to agree on exactly what every header means, what every response code indicates, and what happens when things go wrong.
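To make the contract idea concrete, here is a minimal sketch of contract-driven response handling, loosely modeled on the SIP status code classes defined in RFC 3261. The mapping of classes to actions is illustrative:

```python
# SIP (RFC 3261) groups response codes into classes, and the contract
# tells both sides exactly what each class means. The suggested actions
# in the comments are illustrative.
def classify_sip_response(code: int) -> str:
    if 100 <= code < 200:
        return "provisional"     # e.g. 180 Ringing: keep waiting
    if 200 <= code < 300:
        return "success"         # e.g. 200 OK: call established
    if 300 <= code < 400:
        return "redirect"        # retry against the new target
    if 400 <= code < 500:
        return "client_error"    # our request violated the contract
    if 500 <= code < 600:
        return "server_error"    # the far side failed: try another route
    if 600 <= code < 700:
        return "global_failure"  # no point retrying anywhere
    raise ValueError(f"code {code} is outside the contract")
```

Anything outside the agreed range is rejected outright. That strictness is the whole discipline: a system that silently tolerates out-of-contract input eventually depends on it.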
This maps directly to how I design internal APIs. At OneRagtime, the boundary between the Platform (investor-facing), the Core (operations), and the Deepdive (data pipeline) was defined as clearly as a telecom protocol boundary. Each system had its own data model, its own deployment cycle, and a strict contract for how it communicated with the others.
Most startups I audited during my time at OneRagtime did not think this way. Their services were tightly coupled, sharing databases, importing each other's code, making assumptions about internal state. It worked when there were two developers. It broke down completely at ten.
Failure Is the Default State
Telecom systems fail constantly. Carriers go down. Routes degrade. Hardware dies. Fraud traffic spikes. The question is never "will this fail?" but "how fast can we recover?"
At Legos, I was on a 24/7 on-call rotation managing SIP cores, SMS gateways, and fraud detection systems for 100 operators. When something broke at 3am, the fix needed to happen in minutes, not hours. You learn very quickly to design for graceful degradation.
At Cantoo, we built the voice infrastructure with auto-healing as a core principle. If a routing node went down, traffic automatically shifted to healthy nodes. If a carrier started returning errors, the routing engine would deprioritize that carrier in real-time. No human intervention needed.
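The error-driven deprioritization can be sketched in a few lines. This is a toy version of the idea, not the actual Cantoo routing engine; the carrier names and decay weights are invented for illustration:

```python
# Toy sketch of error-driven carrier deprioritization: each carrier keeps
# a health score, recent behavior dominates, and routing always prefers
# the healthiest carrier. Weights (0.8 / 0.2) are arbitrary.
class CarrierRouter:
    def __init__(self, carriers):
        # Health score in [0, 1]; start every carrier fully trusted.
        self.health = {c: 1.0 for c in carriers}

    def record_result(self, carrier, ok):
        # Exponentially weighted update: one failure immediately drags
        # the score down, sustained success slowly restores it.
        signal = 1.0 if ok else 0.0
        self.health[carrier] = 0.8 * self.health[carrier] + 0.2 * signal

    def pick(self):
        # Route to the healthiest carrier; no human intervention needed.
        return max(self.health, key=self.health.get)
```

A single failure is enough to shift traffic away, and the failing carrier earns its way back only by completing calls again. The human on call reviews the decision in the morning instead of making it at 3am.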
This is something most web applications still get wrong. They design for the happy path and treat failures as edge cases. But at scale, edge cases become daily occurrences. A system that handles failure gracefully is more valuable than a system that is theoretically faster but brittle.
Regulatory Compliance as a Feature
Telecom is one of the most regulated industries on the planet. To operate in the UK, France, and Germany, as we did with Cantoo, you need separate licenses for each country. You need to comply with lawful interception requirements, data retention rules, numbering plan regulations, and emergency services obligations.
Most developers see compliance as a burden. In telecom, you learn to see it as a competitive advantage. Getting licensed in three countries took us months of work. But once we had those licenses, they became a moat. Any competitor wanting to offer the same service had to go through the same painful process.
At Consentio, I apply the same thinking to our ISO certifications (27001, 27017, 27018). Yes, the certification process is demanding. But it gives our clients, including Aldi, Carrefour, and Kroger, confidence that their data is handled properly. Compliance is not overhead. It is a selling point.
Capacity Planning Is Not Guessing
In telecom, you do not scale on demand. You plan capacity months in advance because provisioning circuits with carriers takes time. You need to know how much traffic you will handle next quarter, which routes will grow, and where you need redundancy.
This discipline transferred directly to how I think about infrastructure planning at every company since. At Cantoo, even though we ran on Kubernetes with auto-scaling, we still did quarterly capacity reviews. Auto-scaling handles spikes, but it does not handle structural growth. If your user base doubles, you need to think about database capacity, storage costs, network bandwidth, and operational overhead long before the auto-scaler kicks in.
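The quarterly review itself is mostly back-of-the-envelope arithmetic like the sketch below. Every number here is invented for illustration; the point is that compound growth plus a headroom margin is computed in advance, not discovered in production:

```python
# Back-of-the-envelope capacity projection of the kind a quarterly
# review produces. All inputs are illustrative.
def project_peak(current_peak, quarterly_growth, quarters, headroom=0.3):
    """Project peak load with compound growth, plus redundancy headroom."""
    projected = current_peak * (1 + quarterly_growth) ** quarters
    return projected * (1 + headroom)

# Example: 120 peak DB connections growing 25% per quarter exhausts a
# 200-connection pool within a year, even before headroom is added.
needed = project_peak(current_peak=120, quarterly_growth=0.25, quarters=4)
```

An auto-scaler never runs this calculation for you, because the constraint it hits (a connection pool, a carrier circuit, a rate limit) is usually outside its control.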
The startups I audited at OneRagtime that struggled with scaling were almost always the ones that relied entirely on auto-scaling and never did any forward planning. They would hit a database connection limit or a third-party API rate limit and scramble to fix it in production.
Debugging Distributed Systems
Telecom engineers debug distributed systems by default. A voice call might traverse five or six systems between the caller and the callee: the originating gateway, the routing engine, the billing system, the destination carrier, and various protocol translators along the way. When call quality degrades, you need to trace the problem across all of these components.
I brought this thinking to every engineering team I have led. At Consentio, when an order fails to process, the debugging process follows the same pattern: trace the request across services, examine each handoff point, identify where the data or the timing went wrong.
Most web developers are used to debugging a single application. They look at logs, set breakpoints, and step through code. Debugging distributed systems requires a different mindset: you think in terms of flows, not functions. You care about what happened between systems, not just within them.
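Thinking in flows starts with something very simple: one trace ID that travels with the request, and a record appended at every handoff. This is a deliberately minimal sketch with invented service names, standing in for what a real tracing system (OpenTelemetry, for instance) does at scale:

```python
import time
import uuid

# Minimal sketch of flow-level tracing: propagate one trace ID across
# every handoff so a failure can be located *between* systems, not just
# within one. Service names are illustrative.
def new_trace():
    return {"trace_id": uuid.uuid4().hex, "hops": []}

def record_hop(trace, service, status):
    # Each system appends its own handoff record before passing the
    # trace downstream, so the full path is reconstructable afterwards.
    trace["hops"].append(
        {"service": service, "status": status, "ts": time.time()}
    )
    return trace

def first_failure(trace):
    # Debugging the flow: find where the handoff chain broke.
    return next(
        (h["service"] for h in trace["hops"] if h["status"] != "ok"),
        None,
    )
```

Once every service participates, "where did this order fail?" stops being an archaeology project across five sets of logs and becomes a single lookup.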
The Unsexy Advantage
None of this is glamorous. Protocols, compliance, capacity planning, failure recovery: these are not the topics that get attention at developer conferences. But they are the topics that determine whether your product survives contact with reality.
The best CTOs I know all share one trait: they think about the boring stuff. Not the new framework, not the clever abstraction, but the monitoring, the fallback behavior, the data integrity checks, the deployment rollback process.
Telecom taught me that. And it is the single biggest advantage I carry into every technical leadership role. The industry may be "legacy," but the engineering discipline it produces is anything but.