In 2023 and 2024, we partnered with a client in the US that manages devices on a local private network. This client provides both software applications and hardware compute resources, such as virtual machines and physical servers, to their own customers.
The Client’s Challenge
Before partnering with BEAMOps, our client faced a range of significant challenges. Each of their own customers used different virtual machines (VMs) with varying operating systems, so the client struggled to create new customer environments quickly and consistently, and to debug existing ones. They also struggled with long deployment times, and their software application would not scale to their projected growth.
Our Approach & Solution
To address these issues, we focused on improving environment integrity, deployment efficiency, and application stability by applying several of our key principles. We wanted to automate the provisioning of on-premise infrastructure, standardise their environments, automate their deployment process, and migrate a legacy service into Elixir.
- Packaging and Versioning: Our first step was to Dockerise the client’s services and store the built images in GitHub’s Container Registry, implemented via continuous integration (CI) pipelines using GitHub Actions. This ensured that all services were packaged in an environment-agnostic way, so every environment could be deployed to in the same manner, with consistent versioning across all of them. Previously, all versions were tracked in JIRA; moving version tracking into the codebase streamlined the process and made the code the single source of truth.
- Automated Deployments for Internal Environments: We enabled automated deployments to internal environments, significantly accelerating the development and testing of new features. By integrating continuous deployment (CD) pipelines into the workflow, we ensured that every change was automatically validated and deployed in a controlled manner. This improved the QA process by reducing the time required to identify and resolve issues, allowing faster iterations and increased developer productivity.
- Simplifying Software Orchestration: To manage both the development and production environments of the software our client supplies to their customers, we used a single Docker Compose file with environment-specific configuration supplied through environment variables. We consciously chose Docker alone rather than Kubernetes (K8s) to simplify operations for the client’s internal team. None of their in-house developers had prior orchestration experience, and Docker’s gentler learning curve still offered the simplicity and flexibility needed to manage environments effectively. It was important to us that the internal development team could handle operations smoothly after our contract ended, without the need for extra help.
- Creating Reproducible Environments: One of the main challenges was the disparity between customer environments. Some customers ran the C# application on Linux VMs, while others used Windows machines, leading to deployment inconsistencies and making it difficult to replicate customer environments locally. To address this, we leveraged Packer to automate the creation of machine images, generating OVA (Open Virtual Appliance) files to ensure uniformity across virtual machines. This standardised the VM creation process, giving every customer a consistent setup and enabling the client to quickly spin up, destroy, and recreate environments as needed, drastically reducing the time spent onboarding a new customer and making it easy to tear down and rebuild environments in their internal infrastructure.
- Migrating the Legacy Service: The main API for our client’s software application was written in C#. The system exposed a WebSocket service that provided real-time updates and communication between the frontend and devices. We migrated this service to Elixir incrementally, applying the Kaizen principle of continuous improvement: our Elixir application handled the migrated components and forwarded requests for non-migrated parts to the legacy system. We chose Elixir because the client planned to scale their business and needed an application that could track and display the status of thousands of devices on a local network. Elixir was the perfect candidate thanks to its inherently concurrent, resilient, and fault-tolerant nature and its excellent support for real-time updates. We completed the migration in six months.
- Monitoring & Visualisation: To ensure transparency and continuous improvement, we integrated Grafana, Loki, and PromEx into the Docker Compose setup, enabling robust monitoring and visualisation of the application’s performance. This gave the client real-time insight into the health of their application suite.
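The CI packaging step described above can be sketched as a GitHub Actions workflow. The image and service names here are illustrative, not the client’s actual configuration:

```yaml
# .github/workflows/ci.yml — build on every version tag and push to GHCR,
# so the Git tag (not JIRA) is the source of truth for versions.
name: ci
on:
  push:
    tags: ["v*"]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}/device-api:${{ github.ref_name }}
```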
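A continuous-deployment job in the same workflow can then roll the freshly pushed images out to an internal environment. The host, path, and secret names below are placeholders:

```yaml
# Illustrative CD job appended to the CI workflow above: once images are
# published, the internal host pulls them and restarts only changed services.
deploy-internal:
  needs: build-and-push
  runs-on: ubuntu-latest
  steps:
    - name: Roll out to internal environment
      run: |
        ssh deploy@internal-host "cd /opt/app && \
          docker compose pull && \
          docker compose up -d"
      env:
        SSH_AUTH_SOCK: ${{ secrets.SSH_AUTH_SOCK }}
```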
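The single-Compose-file approach relies on Docker Compose’s variable interpolation; a minimal sketch, with hypothetical service and variable names:

```yaml
# compose.yml — one file for every environment; behaviour varies only
# through environment variables, with sensible development defaults.
services:
  device-api:
    image: ghcr.io/example/device-api:${APP_VERSION:-latest}
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - LOG_LEVEL=${LOG_LEVEL:-info}
    ports:
      - "${API_PORT:-4000}:4000"
```

Running `APP_VERSION=v1.2.3 API_PORT=8080 docker compose up -d` then selects production settings without maintaining a second Compose file.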
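The Packer image-building step can be sketched as follows; the ISO, credentials, and provisioning script are placeholders, not the client’s real values:

```hcl
# Illustrative Packer template producing an OVA from a VirtualBox build.
source "virtualbox-iso" "base" {
  iso_url          = "https://releases.ubuntu.com/22.04/ubuntu-22.04-live-server-amd64.iso"
  iso_checksum     = "none"
  guest_os_type    = "Ubuntu_64"
  ssh_username     = "packer"
  ssh_password     = "packer"
  shutdown_command = "sudo shutdown -P now"
  format           = "ova" # emit an OVA any customer VM host can import
}

build {
  sources = ["source.virtualbox-iso.base"]

  provisioner "shell" {
    inline = ["curl -fsSL https://get.docker.com | sh"] # bake Docker into the image
  }
}
```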
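The incremental migration pattern described above (often called the strangler fig pattern) can be sketched in Elixir; the module, request, and function names are hypothetical:

```elixir
defmodule Gateway.Router do
  # Routes each incoming WebSocket request either to the new Elixir
  # implementation or to the still-running C# legacy service. As endpoints
  # are migrated, their names move into @migrated and the fallback shrinks.
  @migrated MapSet.new(["device_status", "device_list"])

  def handle(request_type, payload) do
    if MapSet.member?(@migrated, request_type) do
      Gateway.Handlers.dispatch(request_type, payload)    # new Elixir code path
    else
      Gateway.LegacyClient.forward(request_type, payload) # proxy to the C# service
    end
  end
end
```

This keeps both systems live during the migration: the frontend talks to one endpoint throughout, and each component can be cut over and verified independently.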
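The monitoring stack slots into the same Compose file as additional services; image versions here are illustrative:

```yaml
# Monitoring services alongside the application; PromEx runs inside the
# Elixir app itself, exposing Prometheus metrics and Grafana dashboards.
services:
  loki:
    image: grafana/loki:2.9.0
    ports: ["3100:3100"]
  grafana:
    image: grafana/grafana:10.4.0
    ports: ["3000:3000"]
    depends_on: [loki]
```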
Results & Impact
The changes we implemented led to significant improvements across the board. By automating the creation of VM images and streamlining deployment, the client reduced setup time and improved environment consistency. Our work enabled quicker feature deployment, accelerated migration of legacy systems, and boosted internal development processes.
Our efforts also resulted in:
- Faster setup for new client environments, helping the client scale their business
- Reduced deployment times in internal environments
- Increased application stability through regular deployments and automated workflows
- Improved internal development pace, allowing for quicker delivery of new features
- Legacy system migration to Elixir, enhancing the application’s scalability and real-time capabilities