{"id":7801,"date":"2019-10-17T03:43:59","date_gmt":"2019-10-16T20:43:59","guid":{"rendered":"http:\/\/54.151.235.32\/?p=7801"},"modified":"2021-03-03T18:00:26","modified_gmt":"2021-03-03T11:00:26","slug":"serverless-containers-are-the-future-of-container-infrastructure-2","status":"publish","type":"post","link":"https:\/\/renovacloud.com\/en\/serverless-containers-are-the-future-of-container-infrastructure-2\/","title":{"rendered":"SERVERLESS CONTAINERS ARE THE FUTURE OF CONTAINER INFRASTRUCTURE"},"content":{"rendered":"<p>In an effort to increase system stability, developer productivity, and cost effectiveness, IT and DevOps teams have undergone two major shifts in recent years. The first was the move from on-premises infrastructure to the cloud. This shift allows companies to remove a significant amount of overhead in managing their technologies while scaling capacity up or down as needed, and adding the flexibility to use a plethora of virtual machine (VM) types, sizes, and pricing models.<\/p>\n<p>The second and more recent shift is the major pivot to containers and serverless solutions, with Kubernetes taking charge and becoming the industry standard for container orchestration. The combination of these two shifts presents organizations with a unique question: how do you maximize an application\u2019s uptime while maintaining a cost-effective infrastructure at both layers?<\/p>\n<p>Keeping availability high by over-provisioning is easy, but it\u2019s also very expensive. 
As a result, several challenges have arisen on the path to building an optimized, cost-effective, and highly available containerized infrastructure on Amazon Web Services (AWS):<\/p>\n<ul>\n<li>Pricing<\/li>\n<li>Instance sizing<\/li>\n<li>Container utilization<\/li>\n<\/ul>\n<p>In this post, we will explore the Spotinst platform and review how it solves these challenges with Serverless Containers.<\/p>\n<p><strong><a href=\"https:\/\/spotinst.com\/\" rel=\"noopener\">Spotinst\u00a0<\/a><\/strong>is an\u00a0<a href=\"https:\/\/aws.amazon.com\/partners\/\" rel=\"noopener\">AWS Partner Network<\/a>\u00a0(APN) Advanced Technology Partner with the\u00a0<a href=\"https:\/\/aws.amazon.com\/blogs\/apn\/team-up-with-aws-competency-partners-for-better-business-and-bigger-results\/\" rel=\"noopener\">AWS Container Competency<\/a>. Spotinst helps companies automate cloud infrastructure management and save on their AWS computing costs by leveraging\u00a0<a href=\"https:\/\/aws.amazon.com\/ec2\/spot\/\" rel=\"noopener\">Amazon EC2 Spot Instances<\/a>\u00a0with ease and confidence.<\/p>\n<h3>Challenges<\/h3>\n<p>To make the most cost-effective use of the cloud, users strive for a flexible infrastructure that scales up or down based on resource requirements such as CPU or memory utilization.<\/p>\n<p>While Kubernetes natively scales the application layer, it does little to manage the underlying infrastructure layer.<\/p>\n<p>Like most container orchestrators, it was designed with a physical data center in mind and assumes capacity is always available as applications scale up and down.<\/p>\n<p>Let\u2019s take a closer look at the challenges businesses face when building optimized, cost-effective, and highly available infrastructure on AWS.<\/p>\n<h3>Pricing<\/h3>\n<p>With three available pricing models on AWS\u2014On-Demand, Reserved, and Spot\u2014we need to decide which model works best for every part of the workload.<\/p>\n<p>Kubernetes does a great job with 
resilience and handling the interruption or replacement of servers. This makes it a great candidate for leveraging Spot Instances, which offer discounts of around 70 percent compared to On-Demand prices.<\/p>\n<p>Then again, it\u2019s important to maintain some persistent instances, both for certain workloads and for cost predictability. That\u2019s where buying a pool of\u00a0<a href=\"https:\/\/aws.amazon.com\/ec2\/pricing\/reserved-instances\/\" rel=\"noopener\">Amazon EC2 Reserved Instances<\/a>\u00a0(RIs) ensures a baseline capacity that is cost-effective.<\/p>\n<h3>Instance Sizing<\/h3>\n<p>The second and most complex challenge is choosing the right infrastructure size and type to satisfy the containers\u2019 actual requirements.<\/p>\n<p>Should we stick to one instance type? Should we mix families or instance sizes? What should we do when a GPU-based pod comes up to do a quick job a few times a day?<\/p>\n<p>Traditionally, the answer to all of these questions has been, \u201cIt varies.\u201d As time goes on, clusters grow and more developers freely push new and differing containers into Kubernetes, resulting in constantly changing answers to the questions above.<\/p>\n<p>Moreover, we need to make sure it all keeps scaling as application requirements change. To illustrate this point, consider that some containers require 0.2 vCPU and 200MB of memory, while others require 8 vCPU and 16GB of memory or more.<\/p>\n<p>As a result, an auto scaling policy based on simple CPU or memory thresholds rarely stays reliable in the long term.<\/p>\n<h3>Container Utilization<\/h3>\n<p>The third and least tackled challenge is container utilization. 
This involves constantly adapting the limits and resource requests of the containers based on their consumption and utilization.<\/p>\n<p>Even after solving the first two issues and successfully using the right instance sizes for our containers, how do we know a container wasn\u2019t over-allocated resources?<\/p>\n<p>Getting a pod of 3.8 vCPU and 7.5GB of memory onto a c5.xlarge instance is great, as it puts the instance at nearly 100 percent allocation, but what if that pod only uses half of that? It would mean half of our instance resources are wasted when they could be used by other containers.<\/p>\n<p>Successfully solving all of these issues cuts costs and frees developer time, yielding infrastructure and containers that self-optimize in real time based on application demand.<\/p>\n<h3>Spotinst Automation Platform<\/h3>\n<p>Spotinst built a DevOps automation platform that helps customers save time and money.<\/p>\n<p>The core of the Spotinst platform is a sophisticated, data-driven infrastructure scheduling mechanism. It allows you to run production-grade workloads on Spot Instances while utilizing a variety of instance types and sizes. Spotinst offers an enterprise-grade SLA on Spot Instances while guaranteeing the full utilization of RIs.<\/p>\n<p>On top of this automation platform, Spotinst built\u00a0<strong><a href=\"https:\/\/spotinst.com\/products\/ocean\/\" rel=\"noopener\">Ocean<\/a><\/strong>, a Kubernetes pod-driven auto scaling and monitoring service. 
Ocean solves the challenges we discussed earlier automatically and in real time, creating a serverless experience inside your AWS environment, and it does so without any access to your data.<\/p>\n<p>Additionally, Ocean helps you right-size the pods in your cluster for optimal resource utilization.<\/p>\n<p>Spotinst solves the pricing and instance sizing challenges by using dynamic, container-driven auto scaling based on flexible instance sizes and life cycles.<\/p>\n<p>There are three main components at play here: dynamic infrastructure scaling, pod rescheduling simulations, and cluster headroom. Here is how they work.<\/p>\n<h3>Dynamic Infrastructure Scaling<\/h3>\n<p>Spotinst Ocean allows customers to run On-Demand, Spot, and Reserved Instances of all types and sizes within a single cluster. This flexibility means the right instance can be provisioned when it\u2019s needed, while guaranteeing minimal resource waste.<\/p>\n<p>Whenever the Kubernetes Horizontal Pod Autoscaler (HPA) scales up, or a new deployment happens, the Ocean auto scaler reads the overall requirements of that event. These include:<\/p>\n<ul>\n<li>CPU reservation<\/li>\n<li>Memory reservation<\/li>\n<li>GPU request<\/li>\n<li>ENI limitations<\/li>\n<li>Pod labels, taints, and tolerations<\/li>\n<li>Custom labels (instance selectors, on-demand request)<\/li>\n<li>Persistent volume claims<\/li>\n<\/ul>\n<p>After aggregating these requests, Ocean calculates the most cost-effective infrastructure to answer them. 
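<\/p>\n<p>For illustration, a minimal, hypothetical pod spec might declare several of the signals listed above (resource requests, labels, and a toleration) for a container-driven auto scaler to read:<\/p>\n

```yaml
# Hypothetical pod spec; all names and values are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: web-worker
  labels:
    app: web-worker
spec:
  containers:
    - name: app
      image: nginx:1.17        # any container image
      resources:
        requests:
          cpu: 1500m           # 1.5 vCPU reservation
          memory: 6500Mi       # memory reservation
        limits:
          cpu: 2000m
          memory: 8000Mi
  tolerations:
    - key: dedicated           # lets the pod land on tainted nodes
      operator: Equal
      value: spot
      effect: NoSchedule
```

<p>Fields like these, together with any persistent volume claims, are what get aggregated before instances are chosen.<\/p>\n<p>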
This can be anywhere from a single t2.small to a combination of several m5.2xlarge, c5.18xlarge, p3.2xlarge, or r4.large instances, or anything in between.<\/p>\n<p>The use of dynamic instance families and sizes helps keep the cluster defragmented, with high resource allocation, as it grows and scales up.<\/p>\n<h3>Pod Rescheduling Simulations<\/h3>\n<p>This dynamic infrastructure scaling ensures maximum resource allocation as we scale up and grow our clusters.<\/p>\n<p>To make sure that allocation is maintained when Kubernetes scales pods down as application demand decreases, Ocean runs pod rescheduling simulations every 60 seconds to figure out whether the cluster can be further defragmented.<\/p>\n<p>By considering all the requirements, Ocean checks for any expendable nodes: nodes whose pods can all be drained and rescheduled onto other existing nodes in the cluster, so the node can be removed without any pod going into a pending state.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-1.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-14079 size-full\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-1.png\" alt=\"Spotinst-Serverless-1\" width=\"1239\" height=\"456\" \/><\/a><\/p>\n<p><em>Figure 1 \u2013 Pod rescheduling simulation.<\/em><\/p>\n<p>In\u00a0<em>Figure 1<\/em>, you can see the pod rescheduling simulation when scaling down expendable nodes, considering pod and instance resources as well as other restrictions. 
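<\/p>\n<p>To make the simulation concrete, here is a toy sketch (our own illustration, not Spotinst\u2019s actual algorithm) of the expendability check: a node is expendable only if every pod it runs fits into the spare capacity of the remaining nodes:<\/p>\n

```python
# Toy expendable-node check: simulate draining a node by first-fit packing
# its pods (largest CPU request first) into the free capacity of the other
# nodes. If every pod finds a slot, draining would leave nothing pending.
def can_drain(node, others):
    free = [{'cpu': n['cpu_free'], 'mem': n['mem_free']} for n in others]
    for pod in sorted(node['pods'], key=lambda p: p['cpu'], reverse=True):
        for slot in free:
            if slot['cpu'] >= pod['cpu'] and slot['mem'] >= pod['mem']:
                slot['cpu'] -= pod['cpu']
                slot['mem'] -= pod['mem']
                break
        else:
            return False  # this pod has no home elsewhere; keep the node
    return True
```

<p>A real implementation would also honor affinities, taints, persistent volume locations, and the other restrictions mentioned above.<\/p>\n<p>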
In this example, the simulation reduced the cluster\u2019s CPU and memory capacity by about 10 percent, comparing the pre-scale and post-scale totals.<\/p>\n<h3>Cluster Headroom<\/h3>\n<p>Scaling the infrastructure only when it\u2019s needed can achieve maximal node resource allocation, but may also slow the responsiveness of our application.<\/p>\n<p>While high availability is critical, waiting for new nodes to spin up every time capacity is needed can lead to less-than-optimal latency and other performance issues. This is where cluster headroom comes in. Headroom is spare capacity, kept in units sized to match the cluster\u2019s most common deployments, that stays available so Kubernetes can instantly schedule new pods.<\/p>\n<p>Headroom is automatically configured by Ocean based on the size of each deployment in the cluster, and changes dynamically over time.<\/p>\n<p>In practice, effective headroom means that more than 90 percent of Kubernetes scaling events are scheduled instantly into the headroom capacity, while a new headroom scaling event is triggered to prepare for the next time Kubernetes scales.<\/p>\n<p>If an abnormal deployment requires more than the available headroom, the cluster auto scaler covers the entire deployment\u2019s resource requirements with a single scaling event of varying instance types and sizes. This helps Ocean achieve 90 percent resource allocation in the cluster while staying more responsive than a traditional step scaling policy.<\/p>\n<h3>Monitoring CPU and Memory Utilization<\/h3>\n<p>Once the pods are placed on the instances, the Ocean Pod Right-Sizing mechanism will start monitoring the pods for CPU and memory utilization. 
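<\/p>\n<p>As a rough sketch of the idea (an assumed heuristic, not Ocean\u2019s published method), a memory recommendation could be derived from the peak usage observed over the monitoring window:<\/p>\n

```python
# Assumed right-sizing heuristic: recommend a memory request close to the
# observed peak rather than the average, so the pod still fits at its
# busiest moments while reclaiming the unused allocation above the peak.
def recommend_memory_gb(usage_samples_gb):
    return round(max(usage_samples_gb), 1)

recommend_memory_gb([2.0, 2.9, 3.4, 2.5])  # 3.4, for a pod allocated 4GB
```

<p>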
Once enough data is aggregated, Ocean starts pushing recommendations on how to right-size the pod.<\/p>\n<p>For example, if a pod was allocated 4GB of memory and over a few days its usage ranges between 2GB and 3.4GB, with an average of 2.9GB, Ocean Right-Sizing will recommend sizing the pod at ~3.4GB, which further helps\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Bin_packing_problem\" rel=\"noopener\">bin pack<\/a>\u00a0and defragment the cluster while keeping it highly utilized.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-2.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-14080 size-full\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-2.png\" alt=\"Spotinst-Serverless-2\" width=\"1100\" height=\"686\" \/><\/a><\/p>\n<p><em>Figure 2 \u2013 Deployment with Amazon EKS best practices.<\/em><\/p>\n<p>In\u00a0<em>Figure 2<\/em>, you can see the deployment architecture of Ocean +\u00a0<a href=\"https:\/\/aws.amazon.com\/eks\/\" rel=\"noopener\">Amazon Elastic Kubernetes Service<\/a>\u00a0(Amazon EKS), where Amazon EKS manages the Kubernetes control plane and Ocean manages the worker nodes. 
The Amazon EKS architecture and security stay in place, and Ocean wraps around the worker nodes to dynamically orchestrate them.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-3.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14081\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-3.png\" alt=\"Spotinst-Serverless-3\" width=\"979\" height=\"475\" \/><\/a><\/p>\n<p><em>Figure 3 \u2013 Deployment with any container orchestrator on AWS.<\/em><\/p>\n<p>In\u00a0<em>Figure 3<\/em>, you can see the general Ocean architecture, with the worker nodes connecting to one of the supported container orchestrators. On the left side is the variety of container management control planes, and on the right, Ocean wrapping around the worker nodes.<\/p>\n<h3>Implementation Walkthrough<\/h3>\n<p>Deploying Spotinst Ocean for your existing Kubernetes implementation is simple and can be done in just a few minutes using your preferred deployment method, such as\u00a0<a href=\"https:\/\/api.spotinst.com\/provisioning-ci-cd-sdk\/provisioning-tools\/cloudformation\/examples\/ocean-cluster-2\/\" rel=\"noopener\">AWS CloudFormation<\/a>,\u00a0<a href=\"https:\/\/api.spotinst.com\/provisioning-ci-cd-sdk\/provisioning-tools\/terraform\/resources\/terraform-v-2\/ocean-aws\/\" rel=\"noopener\">Terraform<\/a>,\u00a0<a href=\"https:\/\/github.com\/spotinst\/spotinst-ansible-module\/blob\/master\/examples\/ocean\/spotinst-ocean.yml\" rel=\"noopener\">Ansible<\/a>,\u00a0<a href=\"https:\/\/api.spotinst.com\/spotinst-api\/ocean\/ocean-cloud-api\/ocean-for-aws\/create\/\" rel=\"noopener\">RESTful API<\/a>, or the\u00a0<a href=\"https:\/\/api.spotinst.com\/ocean\/tutorials\/ocean-for-aws\/create-an-ocean-cloud-cluster\/\" rel=\"noopener\">Spotinst UI<\/a>.<\/p>\n<p>Ocean operates 
within your AWS environment and leverages existing cluster components, including virtual private clouds (VPCs), subnets,\u00a0<a href=\"https:\/\/aws.amazon.com\/iam\/\" rel=\"noopener\">AWS Identity and Access Management<\/a>\u00a0(IAM) roles, images, security groups, instance profiles, key pairs, user data, and tags.<\/p>\n<p>In a nutshell, Ocean will fetch all of the existing configurations from your cluster to build the configuration for the worker nodes that it will spin up. Then, all you have to do is deploy the Spotinst Kubernetes Cluster Controller into your cluster so that Ocean can receive the Kubernetes metrics reported to it.<\/p>\n<p>Once this is done, Ocean will handle all infrastructure provisioning for you going forward. You can give your developers complete freedom to deploy whatever they choose.<\/p>\n<p>Let\u2019s walk through a setup so you can see how simple it is.<\/p>\n<ul>\n<li>First, navigate on the left to\u00a0<strong>Ocean<\/strong>\u00a0&gt;\u00a0<strong>Cloud Clusters<\/strong>, and then choose\u00a0<strong>Create Cluster<\/strong>.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-4.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14082\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-4.png\" alt=\"Spotinst-Serverless-4\" width=\"1019\" height=\"449\" \/><\/a><\/p>\n<ul>\n<li>On the next screen, choose to either join an existing cluster or create a new one.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-5.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14083\" 
src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-5.png\" alt=\"Spotinst-Serverless-5\" width=\"1169\" height=\"514\" \/><\/a><\/p>\n<ul>\n<li>In the\u00a0<strong>General<\/strong>\u00a0tab, choose the cluster name, region and Auto Scaling Group to import the worker nodes configurations from.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-6.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14084\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-6.png\" alt=\"Spotinst-Serverless-6\" width=\"1041\" height=\"514\" \/><\/a><\/p>\n<ul>\n<li>On the\u00a0<strong>Configuration<\/strong>\u00a0page, verify that all of your configurations were imported and click\u00a0<strong>Next<\/strong>.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-7.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14085\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-7.png\" alt=\"Spotinst-Serverless-7\" width=\"1184\" height=\"583\" \/><\/a><\/p>\n<ul>\n<li>To install the controller, generate a token in Step 1 of the wizard and run the script provided in Step 2. 
Wait two minutes and test connectivity; a green arrow will appear.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-8.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14086\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-8.png\" alt=\"Spotinst-Serverless-8\" width=\"1129\" height=\"555\" \/><\/a><\/p>\n<ul>\n<li>On the summary page, you can see the JSON configuration of the cluster and get a Terraform template populated with all of the cluster\u2019s configurations.<\/li>\n<li>Once reviewed, create the cluster, and Ocean will handle and optimize your infrastructure going forward.<\/li>\n<\/ul>\n<h3>Optimization Data<\/h3>\n<p>Now that we know what Ocean is and how to deploy it, let\u2019s look at some numbers and benefits from real usage scenarios, and how using dynamic infrastructure can help.<\/p>\n<p><strong>Example #1: Pod requires 6,500MB of memory and 1.5 vCPUs \u2013 Different Instance Size<\/strong><\/p>\n<p>Running five such pods on m5.large or m5.xlarge instances will cost $0.096 per hour per pod. 
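<\/p>\n<p>The arithmetic behind these per-pod figures can be reproduced with a short helper (instance specs and On-Demand prices are assumed from AWS\u2019s published us-east-1 rates at the time of writing):<\/p>\n

```python
# Cost per pod is the instance hourly price divided by how many pods fit;
# the pod count is limited by whichever resource runs out first.
def cost_per_pod(inst_cpu, inst_mem_mb, hourly_price, pod_cpu, pod_mem_mb):
    fits = min(int(inst_cpu // pod_cpu), int(inst_mem_mb // pod_mem_mb))
    return round(hourly_price / fits, 3)

# Pod of 1.5 vCPU and 6,500MB (assumed specs: m5.xlarge 4 vCPU / 16 GiB at
# $0.192 per hour, m5.2xlarge 8 vCPU / 32 GiB at $0.384 per hour):
cost_per_pod(4, 16384, 0.192, 1.5, 6500)   # m5.xlarge:  2 pods -> $0.096
cost_per_pod(8, 32768, 0.384, 1.5, 6500)   # m5.2xlarge: 5 pods -> $0.077
```

<p>The same helper applied to the second example in this section (a 1.8 vCPU pod on a c5.xlarge versus an m5.xlarge) reproduces the $0.17 versus $0.096 per-pod figures.<\/p>\n<p>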
However, going up to an m5.2xlarge will allow further bin packing and reduce the cost to $0.077 per hour per pod.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-9.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14087\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-9.png\" alt=\"Spotinst-Serverless-9\" width=\"1228\" height=\"119\" \/><\/a><\/p>\n<p>As seen in the chart above, by going a few sizes up, we can achieve better resource allocation and reduce the cost by 20 percent.<\/p>\n<p><strong>Example #2: Pod requires 6,500MB of memory and 1.8 vCPUs \u2013 Different Instance Family<\/strong><\/p>\n<p>Running replicas of such pods on c5.xlarge instances will only allow for one pod per instance, while an m5.xlarge will allow for two, reducing the cost from $0.17 per hour per pod to $0.096 per hour per pod.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-10.png\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-14088\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/77de68daecd823babbb58edb1c8e14d7106e83bb\/2019\/10\/08\/Spotinst-Serverless-10.png\" alt=\"Spotinst-Serverless-10\" width=\"2212\" height=\"161\" \/><\/a><\/p>\n<p>As seen in the chart above, by changing the instance family, we can achieve better resource allocation and reduce the cost by 43 percent.<\/p>\n<h3>Conclusion<\/h3>\n<p>To summarize, when running a Kubernetes cluster on AWS, there are three main challenges in cost optimization and availability that Spotinst Ocean helps automate and solve.<\/p>\n<ol>\n<li><strong>Pricing model:<\/strong>\u00a0Spotinst Ocean automatically gets you to 100 percent Reserved Instance coverage to preserve your investment, and leverages cost-effective 
Spot Instances beyond that.<\/li>\n<li><strong>Instance sizing:<\/strong>\u00a0Using its container-driven auto scaler, Ocean spins up the right instance at the right time based on pod requirements, so you no longer have to deal with it.<\/li>\n<li><strong>Container utilization:<\/strong>\u00a0Ocean monitors your containers\u2019 utilization and recommends how to right-size them, avoiding idle resources.<\/li>\n<\/ol>\n<p>With all of this overhead out of the way, managing Kubernetes clusters on AWS is easier than ever. If you\u2019re new to Kubernetes, check out the\u00a0<a href=\"https:\/\/aws.amazon.com\/quickstart\/architecture\/spotinst-ocean-eks\/\" rel=\"noopener\">Amazon EKS + Spotinst Ocean Quick Start<\/a>.<\/p>\n<p>Article in collaboration with Spotinst (spotinst.com)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In an effort to increase system stability, developer productivity, and cost effectiveness, IT and DevOps teams have undergone two major shifts in recent years. The first was the move from on-premises infrastructure to the cloud. 
This shift allows companies to remove a significant amount of overhead in managing their technologies while scaling capacity up or [&#8230;]\n","protected":false},"author":7,"featured_media":7622,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[145,144,35,141,168],"class_list":["post-7801","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-and-analytics","tag-container","tag-serverless","tag-aws","tag-google-cloud","tag-kubernetes"],"_links":{"self":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts\/7801","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/comments?post=7801"}],"version-history":[{"count":0,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts\/7801\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/media\/7622"}],"wp:attachment":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/media?parent=7801"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/categories?post=7801"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/tags?post=7801"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}