So, you have migrated all of your systems to the cloud. Your data is replicated 20 times all over the world and backed up to the moon. You have achieved 99.99% uptime. Your storage is providing millions of input/output operations per second. Latency is so low it cannot even be measured. However, your cloud costs are going through the roof!
Although this is slightly hyperbolic, these types of problems do frequently occur. On-premises systems are migrated to the cloud by architects who focus on the best possible technological solution without paying enough attention to the costs involved. The latest shiny product offering from AWS/Azure/GCP may seem like a must-have and be interesting to work on, but if it satisfies requirements that do not exist, then it’s potentially a waste of money.
4 Areas That Could Be Driving Up Cloud Costs
- One of the main areas where costs are incurred unnecessarily is cloud services that are over-specified. That extra-large virtual machine might seem like a safe bet when the system is first built, but if, a few months in, the demand could be handled by a much smaller instance, there is an opportunity for considerable cost savings. (Sketches showing how each of these four areas can be checked are included after this list.)
- Another key area where we typically identify significant savings is data transfer. Of all the costs generated by using cloud services, data transfer is often the one that catches companies off guard: while data ingress is typically free or very cheap, data egress usually comes at a cost, and transferring data between services and across regions generates further charges. Reassessing where data flows to and from within a system, and whether those flows actually provide user value, is a worthwhile exercise. Replicating data to multiple locations might deliver extremely high availability, but that is often more than is needed to meet the actual uptime requirements. Similarly, a 10 Gbps network connection provides plenty of bandwidth for transferring data between locations, but if a 1 Gbps connection would suffice, you could theoretically be saving up to 85% in costs.
- Log retention can be a costly oversight as well. Clients may leave applications logging in DEBUG mode after pre-production testing instead of setting them back to INFO in production. This can generate a huge amount of log data that isn’t required but still consumes storage and processing resources, and consequently accrues cost. Assessing which logs are being stored, and for how long, frequently identifies cost-saving opportunities.
- Finally, we cannot talk about cloud cost savings without mentioning unused resources. It may seem obvious, but we frequently see costs being incurred for unused instances, storage volumes, IP addresses, and load balancers, among many other services. A snapshot of a volume may have seemed like a great idea in pre-production, but if it’s no longer needed, delete it.
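
As a rough illustration of the first area, the sketch below (Python with boto3, assuming AWS with credentials already configured) flags running EC2 instances whose average CPU utilization over the last two weeks suggests a smaller instance type would cope. The 10% threshold and 14-day window are arbitrary assumptions, not recommendations, and CPU is only one of several utilization signals worth reviewing.

```python
# Illustrative sketch only: flag potentially over-specified EC2 instances.
# Assumes boto3 is installed and AWS credentials/region are configured.
import datetime

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(days=14)

# Walk every running instance and pull its average daily CPU over the window.
# (Pagination is omitted for brevity.)
for reservation in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for instance in reservation["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if not points:
            continue
        avg_cpu = sum(p["Average"] for p in points) / len(points)
        if avg_cpu < 10:  # assumed threshold for a downsizing candidate
            print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
                  f"avg CPU {avg_cpu:.1f}% over 14 days - consider a smaller size")
```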
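For the data transfer area, a similar sketch can break last month’s AWS bill down by usage type via the Cost Explorer API and surface the transfer-related line items. This assumes Cost Explorer is enabled on the account, and matching usage types on the string "DataTransfer" is a heuristic rather than an exhaustive filter; equivalent reports are available from the other providers’ billing tools.

```python
# Illustrative sketch only: surface last month's data-transfer charges.
# Assumes Cost Explorer is enabled and boto3/credentials are configured.
import datetime

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

today = datetime.date.today()
month_start = today.replace(day=1)
prev_month_start = (month_start - datetime.timedelta(days=1)).replace(day=1)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": prev_month_start.isoformat(), "End": month_start.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

# Usage types involving data transfer generally contain "DataTransfer" in
# their name; filter client-side and list them from most to least expensive.
transfer_costs = []
for group in response["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if "DataTransfer" in usage_type and cost > 0:
        transfer_costs.append((cost, usage_type))

for cost, usage_type in sorted(transfer_costs, reverse=True):
    print(f"{usage_type}: ${cost:.2f}")
```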
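For log retention, this sketch lists CloudWatch Logs groups that have no retention policy at all (logs kept forever), along with how much data they hold, and shows where a finite retention period could be applied. The 30-day value is purely an assumed example; the right figure depends on your operational and compliance needs.

```python
# Illustrative sketch only: find log groups with unlimited retention.
import boto3

logs = boto3.client("logs")

paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        # A group with no "retentionInDays" keeps its logs forever.
        if "retentionInDays" not in group:
            stored_gb = group.get("storedBytes", 0) / 1e9
            print(f"{group['logGroupName']}: no retention set, "
                  f"~{stored_gb:.1f} GB stored")
            # Uncomment to cap retention at an assumed 30 days:
            # logs.put_retention_policy(
            #     logGroupName=group["logGroupName"], retentionInDays=30
            # )
```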
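And for unused resources, this sketch lists EBS volumes that are not attached to any instance and Elastic IPs that are not associated with anything; both keep accruing charges while doing no work. The same idea extends to idle load balancers, stale snapshots, and forgotten test environments.

```python
# Illustrative sketch only: list some common kinds of unused resources.
import boto3

ec2 = boto3.client("ec2")

# Volumes in the "available" state are not attached to any instance.
for volume in ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]:
    print(f"Unattached volume {volume['VolumeId']} ({volume['Size']} GiB)")

# Elastic IPs with no association are billed while sitting idle.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print(f"Unassociated Elastic IP {address['PublicIp']}")
```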
How Can You Optimize Cloud Costs?
CTG helps clients re-align their use of resources to provide the most value to their users. We do this in three key ways:
- Ensure the system requirements are defined and that the architecture is designed to meet them without unnecessarily exceeding them.
- Review resource utilization to identify opportunities to move provisioned resources to lower, cheaper tiers.
- Break down monthly invoices and ensure there are no stray resources being paid for unnecessarily.
To learn more about this topic, or how CTG can help your organization implement or optimize your cloud deployments, don’t hesitate to contact us.