Cloud & SaaS

Taking a Desktop Application to the Cloud: Lessons from a Real Migration

A behind-the-scenes look at what it actually takes to turn a single-user desktop application into a multi-tenant cloud service — from architecture decisions to the problems nobody warns you about.

Lost Edges Team · cloud, saas, architecture
Moving from desktop to cloud is not a lift-and-shift. It is a fundamental rethinking of how your application handles state, users, and scale.

The Starting Point

The application was a specialized engineering tool — a pipe ductile fracture analysis (PDFA) calculator — originally built as a Windows desktop application. One user installs it, runs a calculation, gets a result. Simple.

The client wanted to turn it into a cloud-based SaaS product. Multiple users. Role-based access. The ability to queue hundreds or thousands of calculations as a batch. Reporting. Audit trails. All the things that come with a proper multi-tenant service.

This is not a lift-and-shift. This is a ground-up rethinking of the application architecture.

The Architecture Decisions

Multi-Tenancy

The desktop app had no concept of users or tenants. Every file, every configuration, every result lived in a single directory on a single machine. The cloud version needed strict data isolation between tenants.

We chose a shared-infrastructure, isolated-data model: all tenants run on the same application infrastructure, but each tenant’s data is logically separated at the database level with tenant-scoped queries enforced at the ORM layer. This keeps infrastructure costs manageable while maintaining data isolation.
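The key property is that tenant scoping lives in one enforced layer, not in every query call site. A minimal sketch of that idea in plain Python (the class and field names here are illustrative, not from the real codebase, and a real implementation would hook into the ORM's query construction):

```python
class TenantScopedQuery:
    """Wraps a row source and refuses to return rows from other tenants.

    In the real system this filter is injected at the ORM layer, so
    application code cannot construct an unscoped query.
    """

    def __init__(self, rows, tenant_id):
        self._rows = rows
        self._tenant_id = tenant_id

    def all(self):
        # Every read is filtered by tenant_id; callers cannot opt out.
        return [r for r in self._rows if r["tenant_id"] == self._tenant_id]

    def filter(self, **criteria):
        # Extra criteria narrow the result, but only within the tenant's rows.
        return [
            r for r in self.all()
            if all(r.get(k) == v for k, v in criteria.items())
        ]


rows = [
    {"tenant_id": "acme", "result": 1.2},
    {"tenant_id": "globex", "result": 3.4},
]

q = TenantScopedQuery(rows, tenant_id="acme")
print(q.all())  # only acme rows, regardless of what the caller asks for
```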

The Computation Engine

The core calculation engine was originally written in Python. For the desktop version, it ran synchronously — the user clicks “Calculate,” waits, and gets a result. For the cloud version, calculations needed to run asynchronously so the web interface stays responsive.

We wrapped the engine in a worker process that pulls jobs from a message queue. The web frontend submits a calculation request, the request gets queued, a worker picks it up, runs it, and stores the result. The frontend polls for completion or receives a notification via WebSocket.
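The submit-queue-poll flow can be sketched with Python's standard library (in production the queue is a persistent broker and the worker is a separate process; `run_calculation` here is a stand-in for the real PDFA engine):

```python
import queue
import threading

jobs = queue.Queue()
results = {}

def run_calculation(params):
    # Stand-in for the real PDFA engine call.
    return sum(params)

def worker():
    # Worker loop: pull a job, run it, store the result.
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut down
            break
        job_id, params = job
        results[job_id] = run_calculation(params)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The "frontend" submits a request, then polls for the stored result.
jobs.put(("job-1", [1.0, 2.0, 3.0]))
jobs.join()                      # in production the frontend polls instead
print(results["job-1"])          # 6.0
```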

Batch Processing

This was the biggest new capability. Desktop users ran one calculation at a time. Cloud users wanted to define a batch of hundreds or thousands of parameter combinations and run them all at once.

We built a batch orchestrator that:

  • Accepts a batch definition (parameter ranges, step sizes, or explicit parameter lists)
  • Generates individual calculation tasks
  • Queues them with appropriate priority and rate limiting
  • Tracks progress and handles failures with automatic retry
  • Aggregates results into downloadable reports when the batch completes
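The first step of the orchestrator, expanding a batch definition into individual tasks, can be sketched as a cartesian product over parameter lists (the field names below are illustrative; the real schema also supports ranges and step sizes):

```python
import itertools

def expand_batch(param_ranges):
    """Expand a batch definition (parameter name -> list of values)
    into one task dict per parameter combination."""
    names = list(param_ranges)
    return [
        {"task_id": i, "params": dict(zip(names, combo))}
        for i, combo in enumerate(itertools.product(*param_ranges.values()))
    ]

batch = {"pressure_mpa": [8.0, 10.0], "diameter_mm": [610, 914, 1067]}
tasks = expand_batch(batch)
print(len(tasks))  # 6 combinations, queued as individual tasks
```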

The queue is backed by a persistent message broker so nothing gets lost if a worker crashes mid-calculation. Failed tasks are retried with exponential backoff, and permanently failed tasks are flagged for manual review.
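The retry policy can be sketched as follows (delays are computed rather than slept so the policy is easy to test; attempt counts and base delay are illustrative, not the production values):

```python
def retry_with_backoff(task, max_attempts=4, base_delay=1.0):
    """Retry a failing task with exponential backoff.

    After the final attempt the exception propagates, and the caller
    flags the task for manual review.
    """
    delays = []
    for attempt in range(max_attempts):
        try:
            return task(), delays
        except Exception:
            if attempt == max_attempts - 1:
                raise  # permanently failed -> flagged for manual review
            delays.append(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky():
    # Fails twice with a transient error, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result, delays = retry_with_backoff(flaky)
print(result, delays)  # ok [1.0, 2.0]
```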

EOS Selection Per Batch

One feature the client specifically requested: the ability to select a different equation of state (EOS) for each batch. The desktop version was hardcoded to a single EOS. The cloud version needed to support GERG-2008, Peng-Robinson, and others — and let the user choose at batch creation time.

This required changes to the computation engine, the batch definition schema, and the results schema. Each result now carries metadata about which EOS produced it, making results comparable across different models.
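A minimal sketch of what carrying EOS metadata on each result might look like, assuming a validated field on the result record (the class, field names, and the `SUPPORTED_EOS` set are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

SUPPORTED_EOS = {"GERG-2008", "Peng-Robinson"}  # illustrative subset

@dataclass
class CalculationResult:
    batch_id: str
    task_id: int
    eos: str      # which equation of state produced this result
    value: float
    computed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def make_result(batch_id, task_id, eos, value):
    # Reject unknown models up front so every stored result is comparable.
    if eos not in SUPPORTED_EOS:
        raise ValueError(f"unsupported EOS: {eos}")
    return CalculationResult(batch_id, task_id, eos, value)

r = make_result("batch-7", 0, "GERG-2008", 42.1)
print(r.eos)  # results from different EOS models stay distinguishable
```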

Security

The desktop app had no security layer at all. It did not need one — it ran on a local machine behind a corporate firewall.

The cloud version needed:

Authentication. We implemented OAuth 2.0 with support for SSO providers. Every API request is authenticated with a JWT that carries tenant and role information.
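The claim checks on each request can be sketched like this (signature verification would be handled by a JWT library such as PyJWT; this sketch covers only the claim validation, and the claim names beyond the standard `sub` and `exp` are illustrative):

```python
import time

def validate_claims(claims, now=None):
    """Validate the claims a decoded JWT must carry before a request
    is allowed through: an unexpired token plus user, tenant, and role."""
    now = time.time() if now is None else now
    if claims.get("exp", 0) <= now:
        raise PermissionError("token expired or missing exp")
    for required in ("sub", "tenant_id", "role"):
        if required not in claims:
            raise PermissionError(f"missing claim: {required}")
    return {
        "user": claims["sub"],
        "tenant": claims["tenant_id"],
        "role": claims["role"],
    }

ctx = validate_claims({
    "sub": "u123",
    "tenant_id": "acme",
    "role": "engineer",
    "exp": time.time() + 3600,
})
print(ctx["tenant"])  # acme
```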

Authorization. Role-based access control with three tiers: admin (manage users, view all results), engineer (run calculations, view own results), and viewer (read-only access to shared results).
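The three tiers boil down to a role-to-permission matrix checked on every request. A minimal sketch (the permission names are illustrative; the real set is larger):

```python
# Permission matrix for the three roles described above.
PERMISSIONS = {
    "admin":    {"manage_users", "run_calculation", "view_all_results"},
    "engineer": {"run_calculation", "view_own_results"},
    "viewer":   {"view_shared_results"},
}

def authorize(role, action):
    # Deny by default: unknown roles and unlisted actions both fail.
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not {action!r}")

authorize("engineer", "run_calculation")      # allowed
try:
    authorize("viewer", "run_calculation")    # read-only role: denied
except PermissionError as e:
    print(e)
```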

Data isolation. Every database query is scoped to the authenticated tenant. There is no API endpoint that can return data from another tenant, regardless of the parameters passed.

Encryption. Data at rest is encrypted at the storage layer. Data in transit uses TLS everywhere — between the browser and the API, between the API and the workers, and between the workers and the database.

Audit logging. Every calculation request, batch submission, and result access is logged with timestamp, user, and tenant context.

Scaling

The desktop app had no scaling concerns. One user, one machine. The cloud version needed to handle multiple tenants running large batches simultaneously.

Horizontal scaling of workers. The worker pool auto-scales based on queue depth. When a large batch is submitted, new workers spin up to handle the load. When the queue drains, workers scale back down to baseline.
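The scaling decision itself is a simple function of queue depth. A sketch of the policy (the thresholds and caps here are illustrative, not the production values):

```python
import math

def target_workers(queue_depth, baseline=2, per_worker=50, max_workers=20):
    """Scale the worker pool with queue depth: roughly one worker per
    `per_worker` queued tasks, never below baseline or above a hard cap."""
    needed = math.ceil(queue_depth / per_worker) if queue_depth else 0
    return max(baseline, min(needed, max_workers))

print(target_workers(0))      # 2  (baseline when the queue is empty)
print(target_workers(500))    # 10 (a large batch spins up more workers)
print(target_workers(5000))   # 20 (capped at max_workers)
```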

Database scaling. We chose a managed PostgreSQL service with read replicas for reporting queries. Write traffic goes to the primary; read-heavy dashboard and reporting queries go to replicas.

Rate limiting. Per-tenant rate limits prevent any single tenant from consuming all available compute capacity. Limits are configurable per subscription tier.
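One common way to implement per-tenant limits is a token bucket, with capacity and refill rate set by the subscription tier. A sketch under that assumption (time is injected so the behavior is testable; the numbers are illustrative):

```python
class TokenBucket:
    """Per-tenant token bucket: capacity and refill rate would come
    from the tenant's subscription tier."""

    def __init__(self, capacity, refill_per_sec, now=0.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity,
        # then spend one token per admitted request.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
print([bucket.allow(t) for t in (0.0, 0.0, 0.0)])  # [True, True, False]
```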

What We Learned

Assume nothing from the desktop codebase. The original code was full of assumptions about file paths, single-user access, synchronous execution, and unlimited local resources. Very little of it could be reused without significant refactoring.

Invest in the queue early. The batch processing queue was the most complex new component; its failure handling, retry logic, and progress tracking all took several iterations to get right. Starting early gave us the time those iterations needed.

Security is not a bolt-on. Designing authentication, authorization, and data isolation into the architecture from the beginning is orders of magnitude easier than adding it later. We have seen projects that tried to add multi-tenancy after the fact — it never goes well.

Users will find the limits. Whatever batch size you think is reasonable, someone will try to submit ten times that. Build rate limiting and graceful degradation from the start.

The migration from desktop to cloud took approximately five months, including the batch processing system, security layer, and initial production deployment. The resulting platform handles workloads that would have been impossible on the desktop — thousands of calculations per batch, multiple concurrent users, and full audit trails for regulatory compliance.

  • Architecture redesign. A desktop app assumes one user, one machine, one session. A cloud service needs multi-tenancy, authentication, job isolation, and horizontal scaling — none of which exist in the original design.
  • Batch processing at scale. What was a single calculation on a desktop becomes thousands of queued jobs in the cloud. We built a queuing system that handles task scheduling, retry logic, and progress tracking across distributed workers.
  • Security from scratch. Desktop apps live behind a firewall. Cloud services live on the internet. Authentication, authorization, data isolation, and encryption had to be designed into every layer.

"The hardest part of taking a desktop app to the cloud is not the cloud. It is realizing how many assumptions the original code makes about running on a single machine."

Cloud Architect – Lost Edges Engineering
March 1, 2026