{"id":30599,"date":"2026-05-13T10:52:03","date_gmt":"2026-05-13T03:52:03","guid":{"rendered":"https:\/\/renovacloud.com\/?p=30599"},"modified":"2026-05-13T10:52:03","modified_gmt":"2026-05-13T03:52:03","slug":"how-to-implement-ai-agents-on-aws","status":"publish","type":"post","link":"https:\/\/renovacloud.com\/en\/how-to-implement-ai-agents-on-aws\/","title":{"rendered":"How to Implement AI Agents on AWS in 2026"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">AI agents have moved from research papers into production systems faster than almost any technology in recent memory. This guide covers how to implement AI agents on AWS from architecture selection through to monitoring in a live environment.<\/span><\/p>\n<h2><b>What an AI Agent Actually Does<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Before covering implementation, it helps to be precise about what separates an AI agent from a standard language model interaction.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A regular prompt-and-response cycle ends after one exchange. An AI agent receives a goal, breaks it into steps, decides which tools or data sources to call, executes those calls, evaluates the results, and continues iterating until the goal is complete. 
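The loop just described can be sketched in a few lines of plain Python. This is purely illustrative: the `decide` stub stands in for a foundation-model call, and the `tools` dictionary stands in for Lambda-backed action groups.

```python
# Illustrative ReAct-style loop (reason, act, observe). The "decide" step is a
# stub; in a real agent it would be a foundation-model invocation, and each
# tool would be backed by an action group rather than a local function.

def react_loop(goal, decide, tools, max_steps=5):
    """Run a minimal reason-act-observe loop until `decide` signals completion.

    decide(goal, history) -> ("call", tool_name, tool_input) or ("finish", answer)
    tools: dict mapping tool names to callables.
    """
    history = []  # observations accumulated across iterations
    for _ in range(max_steps):
        action = decide(goal, history)
        if action[0] == "finish":
            return action[1]
        _, tool_name, tool_input = action
        observation = tools[tool_name](tool_input)   # act
        history.append((tool_name, tool_input, observation))  # observe
    raise RuntimeError("step budget exhausted before the goal was met")

# A toy "agent" that looks up an order status, then answers.
def decide(goal, history):
    if not history:
        return ("call", "get_order_status", {"order_id": "A-42"})
    return ("finish", f"Order A-42 is {history[-1][2]}")

tools = {"get_order_status": lambda args: "shipped"}
print(react_loop("Where is order A-42?", decide, tools))  # -> Order A-42 is shipped
```

Note the `max_steps` budget: bounding the loop is what keeps a misbehaving agent from reasoning forever, a safeguard every production deployment needs in some form.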
This is called a ReAct loop (reason, act, observe) and it is what makes agentic workflows fundamentally different from chatbots.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">On AWS, this reasoning and execution cycle is managed by<\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/agents\/\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Amazon Bedrock Agents<\/span><\/a><span style=\"font-weight: 400;\">, which handles the orchestration layer so your team does not need to build the planning and tool-dispatch logic from scratch.<\/span><\/p>\n<h2><b>Why AI Agent Adoption Is Accelerating Right Now<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">The shift from chat-based generative AI to agentic AI is the defining transition of 2025 and beyond. Where the previous generation of AI applications called a language model and returned a response, agents plan, reason, take actions, and loop back to verify outcomes with far less human involvement. That change in architecture unlocks genuinely new business value.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The numbers reflect this shift clearly. <\/span><a href=\"https:\/\/www.warmly.ai\/p\/blog\/ai-agents-statistics\" rel=\"noopener\"><span style=\"font-weight: 400;\">The AI agents market is valued at approximately $7.92 billion in 2025<\/span><\/a><span style=\"font-weight: 400;\"> and projected to reach $236 billion by 2034. According to PwC, 79% of organizations have already implemented AI agents at some level, and 96% of IT leaders plan to expand their agent deployments in 2025. 
<\/span><a href=\"https:\/\/blog.arcade.dev\/agentic-framework-adoption-trends\" rel=\"noopener\"><span style=\"font-weight: 400;\">Organizations that deploy agentic systems report up to 70% cost reduction<\/span><\/a><span style=\"font-weight: 400;\"> in the workflows they automate.<\/span><\/p>\n<table>\n<tbody>\n<tr>\n<td><p><b>79%<\/b><\/p>\n<p><span style=\"font-weight: 400;\">of organizations have implemented AI agents at some level (PwC, 2025)<\/span><\/p><\/td>\n<td><p><b>70%<\/b><\/p>\n<p><span style=\"font-weight: 400;\">cost reduction reported by organizations automating workflows with agentic AI<\/span><\/p><\/td>\n<td><p><b>46.2%<\/b><\/p>\n<p><span style=\"font-weight: 400;\">CAGR for enterprise-focused agentic AI from 2024 to 2030<\/span><\/p><\/td>\n<td><p><b>85%<\/b><\/p>\n<p><span style=\"font-weight: 400;\">of enterprises expected to implement AI agents by end of 2025<\/span><\/p><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-weight: 400;\">For companies running workloads on AWS, the timing is especially good. AWS has built out a comprehensive, production-grade stack for AI agent development and deployment that removes most of the infrastructure heavy lifting that slowed earlier projects down.<\/span><\/p>\n<h2><b>The Core AWS Services for AI Agent Implementation<\/b><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-30600\" src=\"https:\/\/renovacloud.com\/wp-content\/uploads\/2026\/05\/image2-3.png\" alt=\"Modern data center server racks.\" width=\"1024\" height=\"765\" \/><\/p>\n<p><span style=\"font-weight: 400;\">AWS provides a tightly integrated set of services for building, running, and monitoring AI agents. 
Most implementations draw on the same core stack, and understanding how each layer contributes makes the overall architecture much easier to reason about.<\/span><\/p>\n<h3><a href=\"https:\/\/aws.amazon.com\/bedrock\/\" rel=\"noopener\"><b>Amazon Bedrock<\/b><\/a><\/h3>\n<p><span style=\"font-weight: 400;\">Bedrock is the foundation. It provides managed access to models from Anthropic, Meta, Mistral, and Amazon through a single API, and it hosts the agent orchestration layer natively. It integrates directly with<\/span><a href=\"https:\/\/aws.amazon.com\/iam\/\" rel=\"noopener\"> <span style=\"font-weight: 400;\">AWS IAM<\/span><\/a><span style=\"font-weight: 400;\">,<\/span><a href=\"https:\/\/aws.amazon.com\/vpc\/\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Amazon VPC<\/span><\/a><span style=\"font-weight: 400;\">, and<\/span><a href=\"https:\/\/aws.amazon.com\/cloudtrail\/\" rel=\"noopener\"> <span style=\"font-weight: 400;\">AWS CloudTrail<\/span><\/a><span style=\"font-weight: 400;\">, so your agent infrastructure inherits the security and audit controls already in place across your AWS environment.<\/span><\/p>\n<h3><a href=\"https:\/\/aws.amazon.com\/bedrock\/knowledge-bases\/\" rel=\"noopener\"><b>Amazon Bedrock Knowledge Bases<\/b><\/a><\/h3>\n<p><span style=\"font-weight: 400;\">Knowledge Bases connect your agent to internal documents and operational data through a managed retrieval-augmented generation pipeline. 
You point a knowledge base at an<\/span><a href=\"https:\/\/aws.amazon.com\/s3\/\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Amazon S3<\/span><\/a><span style=\"font-weight: 400;\"> bucket, and Bedrock handles the embedding, vector storage, and retrieval automatically.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When a user asks a question that requires internal context, the agent queries the knowledge base and grounds its response in the retrieved content rather than relying on what the model learned during training.<\/span><\/p>\n<h3><a href=\"https:\/\/aws.amazon.com\/lambda\/\" rel=\"noopener\"><b>AWS Lambda<\/b><\/a><\/h3>\n<p><span style=\"font-weight: 400;\">Lambda is where your agent&#8217;s real-world actions are executed. Each action group in a Bedrock agent maps to a Lambda function that performs the actual work: querying a database, calling an external API, writing a record, or triggering a downstream process.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Lambda&#8217;s serverless model means the function runs only when the agent calls it, costs nothing at rest, and scales automatically. This makes it the natural runtime for agent tools across almost every use case.<\/span><\/p>\n<h3><a href=\"https:\/\/aws.amazon.com\/cloudwatch\/\" rel=\"noopener\"><b>Amazon CloudWatch<\/b><\/a><\/h3>\n<p><span style=\"font-weight: 400;\">Observability is not optional in production AI agents. CloudWatch captures metrics, logs, and traces from your Bedrock agent and Lambda functions. 
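To make the value of those traces concrete, here is a local sketch of the kind of per-session aggregation a team might compute before alerting on it. The event shape below is deliberately simplified and hypothetical; real Bedrock trace payloads are nested JSON documents with model- and tool-specific fields.

```python
# Aggregating per-session metrics from agent trace events. The flat event
# shape here is a simplified assumption for illustration, not the actual
# Bedrock trace schema.

def summarize_session(events):
    """Roll a list of trace events up into session-level metrics."""
    summary = {"steps": 0, "tool_calls": [], "input_tokens": 0, "output_tokens": 0}
    for event in events:
        summary["steps"] += 1
        if event.get("type") == "tool_call":
            summary["tool_calls"].append(event["tool"])
        summary["input_tokens"] += event.get("input_tokens", 0)
        summary["output_tokens"] += event.get("output_tokens", 0)
    return summary

session = [
    {"type": "reasoning", "input_tokens": 812, "output_tokens": 64},
    {"type": "tool_call", "tool": "lookup_invoice", "input_tokens": 40, "output_tokens": 12},
    {"type": "final_response", "input_tokens": 900, "output_tokens": 150},
]
print(summarize_session(session))
```

Metrics like these are exactly what feed cost-per-session dashboards and runaway-loop alarms in CloudWatch.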
Bedrock provides step-level trace output that shows the agent&#8217;s reasoning at each decision point, including which tools it called, what inputs it passed, and what it received back.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Piping those traces into CloudWatch gives your team the visibility needed to debug unexpected behavior, catch runaway loops, and monitor cost per agent session.<\/span><\/p>\n<h2><b>How to Implement AI Agents on AWS, Step by Step<\/b><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-30604\" src=\"https:\/\/renovacloud.com\/wp-content\/uploads\/2026\/05\/image4-2.png\" alt=\"Project team planning on office whiteboard.\" width=\"1024\" height=\"765\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Getting from concept to production follows a consistent sequence regardless of what the agent is designed to do. The steps below reflect how a well-structured implementation typically unfolds.<\/span><\/p>\n<h3><b>Step 1: Define the agent&#8217;s scope and use case<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Before writing a single line of configuration, write down exactly what the agent should accomplish, what data it needs access to, what actions it should be able to take, and where the boundaries of its authority are. Agents that are built without clearly defined scope routinely overreach, consume unnecessary tokens, and produce unpredictable outputs in edge cases. One agent should own one coherent responsibility.<\/span><\/p>\n<h3><b>Step 2: Enable Amazon Bedrock and select a foundation model<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Open the <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon Bedrock<\/span><\/a><span style=\"font-weight: 400;\"> console and request access to the foundation model that best matches your task type. For reasoning-heavy agents, Anthropic Claude models on Bedrock perform well. 
For tasks involving structured data and tool use, Amazon Nova models are worth evaluating. Model access is granted per AWS Region, so enable it in the Region closest to your user base.<\/span><\/p>\n<h3><b>Step 3: Create the Bedrock Agent and write agent instructions<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">In the Bedrock console under Builder Tools, create a new agent and write the agent instructions. These instructions tell the model what role it plays, what goals it pursues, and how it should behave across different scenarios. Clear, specific instructions reduce hallucination and improve task completion rates significantly. The instruction set is the most important configuration decision in the entire implementation.<\/span><\/p>\n<h3><b>Step 4: Add Action Groups for tool use and system integration<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Action Groups define what the agent can do beyond generating text. Each action group maps to an <\/span><a href=\"https:\/\/aws.amazon.com\/lambda\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">AWS Lambda<\/span><\/a><span style=\"font-weight: 400;\"> function that the agent can invoke when it determines a tool call is appropriate. Common uses include querying databases, calling internal APIs, reading from <\/span><a href=\"https:\/\/aws.amazon.com\/s3\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon S3<\/span><\/a><span style=\"font-weight: 400;\">, triggering downstream workflows, or updating records in CRM and ERP systems. 
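A tool-backing Lambda function can be quite small. The sketch below follows the function-schema event and response convention for Bedrock agent action groups as the author understands it; the in-memory `ORDERS` lookup is a hypothetical stand-in for real work (a database query or API call), and the exact payload shape should be verified against the current Bedrock Agents documentation.

```python
# Sketch of a Lambda handler backing a Bedrock agent action group that uses a
# function schema. ORDERS is a hypothetical stand-in data store; verify the
# event/response payload shape against current AWS documentation before use.

ORDERS = {"A-42": "shipped", "B-17": "processing"}

def handler(event, context=None):
    # Bedrock passes function parameters as a list of {name, type, value} dicts.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    status = ORDERS.get(params.get("order_id"), "unknown")
    # The response echoes the action group and function, and wraps the result
    # as text for the model to reason over on the next loop iteration.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event["function"],
            "functionResponse": {
                "responseBody": {"TEXT": {"body": f"Order status: {status}"}}
            },
        },
    }

# Exercise the handler locally with a representative event.
sample_event = {
    "messageVersion": "1.0",
    "actionGroup": "order-tools",
    "function": "get_order_status",
    "parameters": [{"name": "order_id", "type": "string", "value": "A-42"}],
}
print(handler(sample_event)["response"]["functionResponse"]["responseBody"]["TEXT"]["body"])
```

Because the handler is a pure function of its event, it can be unit-tested locally before the agent ever calls it.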
The agent decides when and how to call these tools based on the user&#8217;s request and its own reasoning.<\/span><\/p>\n<h3><b>Step 5: Connect a Knowledge Base for RAG<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">If the agent needs to answer questions about internal documents, policies, product catalogs, or domain-specific data, attach a <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/knowledge-bases\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Bedrock Knowledge Base<\/span><\/a><span style=\"font-weight: 400;\">. Point the Knowledge Base at an <\/span><a href=\"https:\/\/aws.amazon.com\/s3\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">S3<\/span><\/a><span style=\"font-weight: 400;\"> bucket containing your documents and let Bedrock handle the embedding, vector indexing via <\/span><a href=\"https:\/\/aws.amazon.com\/opensearch-service\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon OpenSearch Serverless<\/span><\/a><span style=\"font-weight: 400;\">, and retrieval pipeline automatically. The agent will query the Knowledge Base when it needs context it cannot derive from the prompt alone.<\/span><\/p>\n<h3><b>Step 6: Apply Bedrock Guardrails<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Before testing with real users, configure <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/guardrails\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon Bedrock Guardrails<\/span><\/a><span style=\"font-weight: 400;\">. Guardrails let you block disallowed topics, filter personally identifiable information, set word-level content filters, and prevent prompt injection attacks. Every tool call the agent makes passes through the guardrail policy in real time. 
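Guardrail policies are just structured configuration, which makes it practical to define them as data and keep them in version control. The sketch below builds a minimal policy dict; the field names follow the Bedrock `create_guardrail` API as the author understands it, and the topic and PII choices are hypothetical examples, so check the current boto3 reference before relying on the exact shape.

```python
# Building a guardrail definition as plain data, then (optionally) creating it
# with boto3. Field names are the author's reading of the bedrock
# create_guardrail API; the denied topic and PII entities are example values.

def build_guardrail_config(name, denied_topics, pii_entities):
    return {
        "name": name,
        "topicPolicyConfig": {
            "topicsConfig": [
                {"name": t["name"], "definition": t["definition"], "type": "DENY"}
                for t in denied_topics
            ]
        },
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                {"type": e, "action": "ANONYMIZE"} for e in pii_entities
            ]
        },
        "blockedInputMessaging": "This request falls outside the agent's scope.",
        "blockedOutputsMessaging": "The response was blocked by policy.",
    }

config = build_guardrail_config(
    "support-agent-guardrail",
    denied_topics=[{"name": "LegalAdvice", "definition": "Requests for legal advice."}],
    pii_entities=["EMAIL", "PHONE"],
)

# To create it for real (requires AWS credentials and Bedrock access):
# import boto3
# boto3.client("bedrock").create_guardrail(**config)
```

Keeping the policy as reviewable data means a guardrail change goes through the same pull-request process as any other production change.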
Setting guardrails early is far less costly than adding them after an incident in production.<\/span><\/p>\n<h3><b>Step 7: Set up observability with AgentCore and CloudWatch<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Visibility into agent behavior is not optional for production deployments. Enable <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/agentcore\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon Bedrock AgentCore Observability<\/span><\/a><span style=\"font-weight: 400;\">, which streams traces of every reasoning step, tool call, and model response into <\/span><a href=\"https:\/\/aws.amazon.com\/cloudwatch\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon CloudWatch<\/span><\/a><span style=\"font-weight: 400;\">. The GenAI Observability dashboard shows token usage, tool selection patterns, end-to-end latency, and guardrail hits in near real time. This data is what allows teams to debug issues, optimize costs, and demonstrate compliance.<\/span><\/p>\n<h3><b>Step 8: Deploy and test with real workloads<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Use the Bedrock Agent test console to validate the agent&#8217;s behavior against representative inputs before connecting it to any live system. Evaluate both happy-path scenarios and edge cases, including inputs designed to confuse the agent or bypass guardrails. AWS recommends setting up continuous evaluations from the start so that scoring of correctness, faithfulness, and goal-success rate is automated rather than periodic.<\/span><\/p>\n<h2><b>Adding Multi-Agent Collaboration for Complex Workflows<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Single agents handle a great deal, but some workflows are too large or too varied for one agent to manage well. 
<\/span><a href=\"https:\/\/aws.amazon.com\/blogs\/aws\/introducing-multi-agent-collaboration-capability-for-amazon-bedrock\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon Bedrock&#8217;s multi-agent collaboration capability<\/span><\/a><span style=\"font-weight: 400;\">, generally available since March 2025, lets you build systems where a supervisor agent coordinates specialized subagents, each focused on a distinct domain or task type.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-30606\" src=\"https:\/\/renovacloud.com\/wp-content\/uploads\/2026\/05\/image1-3.png\" alt=\"Devs coordinating complex technical presentation.\" width=\"1024\" height=\"765\" \/><\/p>\n<p><span style=\"font-weight: 400;\">In a typical multi-agent setup, the supervisor receives the user&#8217;s request, breaks it into subtasks, delegates each subtask to the most appropriate subagent, collects the results, and synthesizes a final response. Subagents run in parallel where their tasks are independent, which significantly reduces overall latency for complex jobs.<\/span><\/p>\n<p><b><i>An example from production:<\/i><\/b><i><span style=\"font-weight: 400;\"> Northwestern Mutual deployed a multi-agent orchestration framework for internal developer support on AWS. By routing queries to specialized subagents rather than a single generalist agent, <\/span><\/i><a href=\"https:\/\/www.infoq.com\/news\/2025\/01\/aws-bedrock-multi-agent-ai\/\" rel=\"noopener\"><i><span style=\"font-weight: 400;\">response times dropped from hours to minutes<\/span><\/i><\/a><i><span style=\"font-weight: 400;\"> and support engineers were freed to focus on genuinely complex problems.<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">Building a multi-agent system starts with the same steps as a single-agent implementation. Each subagent gets its own instructions, action groups, and optional Knowledge Base. 
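Bedrock implements the supervisor pattern for you, but the shape of it is easy to see in a local sketch: fan independent subtasks out to specialist workers in parallel, then combine the results. The subagents below are stub functions, and the join step stands in for the model-written synthesis a real supervisor produces.

```python
# Local illustration of the supervisor/subagent pattern. The specialist
# "subagents" are stubs; in Bedrock each would be a separate agent with its
# own instructions, action groups, and knowledge base.

from concurrent.futures import ThreadPoolExecutor

def billing_agent(task):
    return f"billing: resolved '{task}'"

def network_agent(task):
    return f"network: resolved '{task}'"

SUBAGENTS = {"billing": billing_agent, "network": network_agent}

def supervisor(subtasks):
    """subtasks: list of (domain, task) pairs routed to specialist subagents.

    Independent subtasks run in parallel, which is where the latency win
    for complex jobs comes from.
    """
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(SUBAGENTS[domain], task) for domain, task in subtasks]
        results = [f.result() for f in futures]  # preserves submission order
    return " | ".join(results)  # stand-in for the supervisor's synthesis step

print(supervisor([("billing", "duplicate charge"), ("network", "VPN timeout")]))
```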
The supervisor agent is then configured with references to each subagent. Amazon Bedrock handles the inter-agent communication protocol automatically.<\/span><\/p>\n<h2><b>Choosing the Right AWS Framework for Your Team<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">AWS provides two primary paths for building agents, and the right choice depends on how much control your team needs over the orchestration logic.<\/span><\/p>\n<h3><b>Amazon Bedrock Agents (Managed Orchestration)<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">This is the fastest path from idea to deployment. Bedrock manages the entire reasoning loop, prompt templates, session context, and tool invocation. You define the agent&#8217;s instructions, action groups, and knowledge bases through the console or via infrastructure-as-code using <\/span><a href=\"https:\/\/aws.amazon.com\/cloudformation\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">AWS CloudFormation<\/span><\/a><span style=\"font-weight: 400;\"> or the <\/span><a href=\"https:\/\/aws.amazon.com\/cdk\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">AWS Cloud Development Kit<\/span><\/a><span style=\"font-weight: 400;\">. Teams without deep ML engineering backgrounds can build capable, production-ready agents using this path.<\/span><\/p>\n<h3><b>AgentCore with Open-Source Frameworks<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">For teams that want to use familiar frameworks such as LangChain, LangGraph, CrewAI, LlamaIndex, or Amazon&#8217;s own Strands Agents SDK, <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/agentcore\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon Bedrock AgentCore<\/span><\/a><span style=\"font-weight: 400;\"> provides the production runtime without locking you into a proprietary orchestration model. 
AgentCore handles memory management, identity controls (including user-scoped memory so different users get appropriately isolated context), and security, while your framework handles orchestration logic. This approach gives experienced ML teams the flexibility they want alongside the enterprise reliability of AWS infrastructure.<\/span><\/p>\n<h2><b>Cost Management When Running AI Agents at Scale<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Agentic workflows consume tokens differently from single-shot model invocations. Every step in the agent&#8217;s reasoning loop generates tokens, including the initial prompt, the tool call decision, the tool result, the follow-up reasoning, and the final response. On a complex task, an agent might complete five to ten reasoning steps, each producing token usage. Building cost monitoring into the architecture before scale arrives is far easier than retrofitting it afterward.<\/span><\/p>\n<p><b><i>Practical tip:<\/i><\/b><i><span style=\"font-weight: 400;\"> Tag every Amazon Bedrock request with project, environment, and team identifiers using <\/span><\/i><a href=\"https:\/\/aws.amazon.com\/aws-cost-management\/aws-cost-explorer\/\" rel=\"noopener\"><i><span style=\"font-weight: 400;\">AWS Cost Explorer<\/span><\/i><\/a><i><span style=\"font-weight: 400;\"> tags. This gives finance and engineering teams per-agent cost visibility without building custom tracking infrastructure. AgentCore Observability also surfaces token counts per session in the CloudWatch dashboard, making cost attribution straightforward at the task level.<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">For high-volume workloads, <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/pricing\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon Bedrock Provisioned Throughput<\/span><\/a><span style=\"font-weight: 400;\"> lets you reserve model capacity at a lower per-token rate than on-demand. 
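The on-demand versus provisioned decision comes down to simple arithmetic once session volume and tokens per session are known. The sketch below shows the break-even comparison; every price in it is a placeholder, not a real Bedrock rate, so substitute current figures from the Bedrock pricing page.

```python
# Back-of-envelope comparison of on-demand vs. provisioned pricing for an
# agent workload. All dollar figures are placeholders, not actual Bedrock
# rates; pull real numbers from the Bedrock pricing page.

def monthly_on_demand_cost(sessions, tokens_per_session, price_per_1k_tokens):
    """On-demand spend for a month of agent sessions."""
    return sessions * tokens_per_session / 1000 * price_per_1k_tokens

def cheaper_option(sessions, tokens_per_session, on_demand_price_per_1k,
                   provisioned_monthly_fee):
    on_demand = monthly_on_demand_cost(sessions, tokens_per_session,
                                       on_demand_price_per_1k)
    return "provisioned" if provisioned_monthly_fee < on_demand else "on-demand"

# Hypothetical workload: 200k sessions/month at ~6k tokens each. A multi-step
# agent burns tokens on every reasoning hop, not just the final answer, so
# tokens per session is typically several times a single model call.
print(monthly_on_demand_cost(200_000, 6_000, 0.01))   # 12000.0
print(cheaper_option(200_000, 6_000, 0.01, 9_000.0))  # provisioned
```

The useful habit here is rerunning the comparison as observed tokens-per-session data accumulates in CloudWatch, since agent prompts and tool chains tend to grow over time.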
If your agent workload is predictable in volume, provisioned throughput typically reduces inference costs significantly compared to on-demand pricing at scale.<\/span><\/p>\n<h2><b>From Proof of Concept to Production on AWS<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Most agent projects stall at the proof-of-concept stage not because the agent cannot do the task, but because the path from prototype to production is not planned.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A working demo in the Bedrock console needs several additional layers before it is enterprise-ready: proper IAM boundaries, guardrail policies, observability wiring, load testing, and a rollback plan if agent behavior degrades after a model update.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">AWS provides a structured path for this through <\/span><a href=\"https:\/\/aws.amazon.com\/bedrock\/agentcore\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">AgentCore<\/span><\/a><span style=\"font-weight: 400;\">, which was designed specifically to address the gap between a prototype that works and a system that runs reliably at scale for thousands of concurrent users. 
Paired with <\/span><a href=\"https:\/\/aws.amazon.com\/step-functions\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">AWS Step Functions<\/span><\/a><span style=\"font-weight: 400;\"> for workflow orchestration and <\/span><a href=\"https:\/\/aws.amazon.com\/cloudwatch\/\" rel=\"noopener\"><span style=\"font-weight: 400;\">Amazon CloudWatch<\/span><\/a><span style=\"font-weight: 400;\"> for continuous monitoring, the production stack for AI agents on AWS is now mature enough to support regulated industry use cases.<\/span><\/p>\n<p><a href=\"https:\/\/datagrid.com\/blog\/ai-agent-statistics\" rel=\"noopener\"><span style=\"font-weight: 400;\">Gartner projects that by 2028, 33% of enterprise software applications will include agentic AI<\/span><\/a><span style=\"font-weight: 400;\">, up from less than 1% in 2024. Organizations that invest in building production-grade agent infrastructure now will be positioned to extend and scale those capabilities far faster than those starting from scratch in two years.<\/span><\/p>\n<h2><b>Renova Cloud: AWS AI Agent Implementation Partner<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Renova Cloud is an AWS Premier Partner based in Vietnam, with a certified engineering team that has designed and deployed generative AI and agentic solutions on AWS across Southeast Asia.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Our engineers work across the full Bedrock stack, covering knowledge base architecture, action group design, multi-agent orchestration, guardrails, and CloudWatch observability.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We hold the AWS AI Services Competency, reflecting proven capability in production AI deployments. 
If your organization is planning to implement AI agents on AWS and wants experienced guidance from architecture through to production, reach out to our team.<\/span><\/p>\n<p><a href=\"https:\/\/renovacloud.com\/en\/contact\/\"><span style=\"font-weight: 400;\">Talk to Our Team \u2192<\/span><\/a><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI agents have moved from research papers into production systems faster than almost any technology in recent memory. This guide covers how to implement AI agents on AWS from architecture selection through to monitoring in a live environment. What an AI Agent Actually Does Before covering implementation, it helps to be precise about what separates [&#8230;]\n","protected":false},"author":18,"featured_media":30602,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[862,13,951],"tags":[],"class_list":["post-30599","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-generative-ai","category-ai-ml","category-aws-service"],"_links":{"self":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts\/30599","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/comments?post=30599"}],"version-history":[{"count":2,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts\/30599\/revisions"}],"predecessor-version":[{"id":30609,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/posts\/30599\/revisions\/30609"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/media\/3
0602"}],"wp:attachment":[{"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/media?parent=30599"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/categories?post=30599"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/renovacloud.com\/en\/wp-json\/wp\/v2\/tags?post=30599"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}