DoiT Cloud Intelligence™

Powered by Technology, Perfected by People.

Announcement
yesterday

Enhance CloudFlow Flows with AWS and GCP CLI Support

When a FinOps or CloudOps process needs to take action, the last mile often happens outside your tooling: someone opens a terminal, runs a one-off CLI command, pastes output into a ticket, and hopes the next person follows the same steps. That breaks auditability and makes repeatable governance hard, especially when you’re promoting artifacts between environments or responding to an incident under time pressure.

CloudFlow now includes a CLI node, so you can run AWS or GCP commands as a first-class step inside a flow, alongside the rest of your automation. This lets you encode terminal-only procedures as an automated, repeatable workflow with a run history, rather than relying on tribal knowledge and manual execution.

For example, a storage expansion workflow can attach an existing EBS volume to an EC2 instance using the same CLI command you already know, but executed as part of the flow. Under the hood, the CLI action maps to the same API sequence you’d expect (validation, state checks, then the attach call), which makes the operation predictable and easier to reason about when troubleshooting.
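As an illustration, the EBS attach step described above might look like the following sketch, wrapped as a small shell function so it reads like a single flow step. The volume ID, instance ID, and device name are placeholders, not values from this release:

```shell
#!/bin/sh
# Sketch: attach an existing EBS volume to an EC2 instance,
# the same AWS CLI call a CLI node would run as a flow step.
attach_volume() {
  # $1 = volume ID, $2 = instance ID, $3 = device name (all placeholders)
  aws ec2 attach-volume \
    --volume-id "$1" \
    --instance-id "$2" \
    --device "$3"
}

# Usage (hypothetical IDs):
# attach_volume vol-0123456789abcdef0 i-0123456789abcdef0 /dev/sdf
```

Running it inside a flow rather than a terminal gives the same command a run history and a consistent execution context.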

Similarly, release or data-movement workflows built around "S3 sync" become explicit, repeatable building blocks. In S3, a "move" is actually a copy-and-delete operation, and encoding that once in a flow reduces accidental deviations between operators and environments.
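The copy-and-delete semantics can be made explicit in the flow step itself. A minimal sketch, with hypothetical bucket and object names:

```shell
#!/bin/sh
# "Move" in S3 is copy-then-delete under the hood.
# Bucket/key names below are placeholders.
s3_move() {
  # $1 = source s3:// URI, $2 = destination s3:// URI
  aws s3 cp "$1" "$2" && aws s3 rm "$1"
}

# Equivalent one-liner: aws s3 mv "$1" "$2"
# s3_move s3://source-bucket/build.tar.gz s3://dest-bucket/build.tar.gz
```

Encoding the two-step form once means every operator and environment runs the identical sequence, rather than each person deciding between `mv`, `cp`+`rm`, or `sync`.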

Vadim Solovey
Improvement
yesterday

Disable Individual Steps in CloudFlow

Previously, if you wanted to temporarily skip a step in your CloudFlow — for debugging, testing a subset of your workflow, or iterating on a design — you had to delete the node and recreate it later. This meant losing your configuration and having to rewire transitions. You can now disable individual steps without deleting them: right-click any step in your CloudFlow and select "Disable Step" to toggle it off. Disabled steps in the flow are:

  • Visually dimmed so you can see at a glance which steps are active
  • Skipped during execution — downstream steps continue to run normally
  • Excluded from validation — errors on disabled steps won't block publishing
  • Hidden from reference pickers — other steps can't accidentally reference a disabled step

Re-enable a step at any time with a single click.

Smart reference handling

If other steps reference the one you're disabling, CloudFlow will display a confirmation dialog. If you proceed, downstream references are automatically cleaned up, and affected steps are flagged so you know exactly what to fix.

Run history

Disabled steps appear with a "Skipped" status in run history, giving you a complete picture of every execution — including what was intentionally bypassed.

Note: Trigger and branch nodes cannot be disabled, as they are structural elements of the flow.

Vadim Solovey
Announcement
2 days ago

Build CloudFlow Flows Without Hand-Wiring Cloud API Calls

If you’ve ever tried to automate a real FinOps workflow, you’ve felt the sprawl: thousands of AWS and GCP APIs, inconsistent parameters, and “just one more edge case” before it’s safe to run at scale. The result is usually the same story: a good idea stalls because stitching it together takes longer than the savings it’s meant to unlock.

CloudFlow’s Agent Builder is designed for that moment: when you know the outcome you want (for example, investigate a cost spike and recommend the next action), but you don’t want to build and maintain a custom integration for every system involved.

Agent Builder provides a way to create and manage flows in CloudFlow so you can move from “we should automate this” to a repeatable workflow that can be triggered and governed like any other CloudFlow automation. Instead of designing every branch of logic around individual APIs, you define the flow in natural language once.


In practice, this is meant to shift your work up a level:

  • from integrating every AWS/GCP endpoint yourself,
  • to packaging a reusable “investigate and act” capability you can drop into multiple flows (alerts, ticket enrichment, scheduled checks, remediation handoffs).

Getting Started with Agent Builder

  • Create or open an existing flow and navigate to the Agent Builder page.
  • Ask Agent Builder to create a new flow (for example: cost anomaly triage, tagging governance checks, owner resolution).
  • Run it in a controlled scope first (a single account, a single project, or a narrow set of services), then expand.
Vadim Solovey
Announcement
3 days ago

Keep CloudFlows deterministic by waiting for cloud resources to be ready

In real workflows, cloud APIs often return before the resource is actually usable. For example, your flow might create an instance and immediately try to attach storage or run a configuration step, only to fail because the instance is still booting or the resource state has not converged.

AWS has native waiters for some APIs, but they cover only a small subset of services and operations. In Google Cloud, you generally don’t get an equivalent out of the box, which pushes teams toward brittle sleep steps and custom retry loops.
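For context, this is what teams do without a flow-level waiter. Where AWS provides a native CLI waiter it blocks until the state is reached; on GCP the equivalent is typically a hand-rolled polling loop. The instance name, zone, and polling parameters below are placeholder examples:

```shell
#!/bin/sh
# Where AWS provides a native waiter, the CLI blocks until the state is reached:
#   aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0

# On GCP there is no general equivalent, so teams hand-roll a loop like this.
INSTANCE="${INSTANCE:-my-instance}"   # placeholder
ZONE="${ZONE:-us-central1-a}"         # placeholder

get_status() {
  gcloud compute instances describe "$INSTANCE" \
    --zone "$ZONE" --format='value(status)'
}

# Poll every 10s, give up after 30 attempts (~5 minutes).
wait_for_status() {
  target="$1"; tries=0
  until [ "$(get_status)" = "$target" ]; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 10
  done
  return 0
}

# Usage (commented out so the sketch is safe to source):
# wait_for_status RUNNING && echo "instance is ready"
```

This is exactly the brittle retry logic the new waiter support is meant to replace.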

You can now add a “waiter” to cloud API actions so CloudFlow pauses until the resource reaches the state you need, then continues automatically. This reduces fragile retry logic and makes multi-step automations behave consistently across AWS and Google Cloud.


A waiter is tied to the specific action you’re running and waits on a defined target state before the next step runs. You enable the waiter on the action, choose what to wait for, and optionally tune the polling behavior for faster or more conservative checks. If the waiter needs extra inputs to verify readiness, you provide those parameters as part of the waiter configuration.

To get started, review the "Add a Waiter" article.

Vadim Solovey
Improvement
3 days ago

Commitment Manager now supports Azure MACC tracking

If you have a Microsoft Azure Consumption Commitment (MACC), you can track drawdown progress directly in DoiT Cloud Intelligence with the same daily-updated visualizations, service-level breakdowns, and forecasting already available for AWS EDPs and Google Cloud spend commitments.

After adding your Azure MACC in Commitment Manager, you’ll be able to:

  • See your spend-to-date, remaining balance, and a forward forecast against your MACC commitment value, updated daily.
  • Understand which Azure services are generating eligible spend and contributing to MACC decrement, with month-by-month totals across your full contract period.
  • Track Marketplace spend against your contract limit with a month-by-month breakdown and drill down into which Marketplace services contributed.
  • View all your cloud commitments in one place if you also have active AWS or Google Cloud spend commitments.

To get started, open Commitment Manager in DoiT Cloud Intelligence. For configuration guidance, see the docs.



Matan Bordo
6 days ago

Close the Loop on Cost Anomalies with Re-Notifications

Cost anomalies do not always stop after the first notification. In real environments, an anomaly can persist across hours or days, get buried in Slack or Teams, or grow significantly after initial detection. FinOps and CloudOps teams need a follow-up signal when cost risk remains unresolved or escalates.

In DoiT Cloud Intelligence, cost anomaly notifications are no longer limited to the initial detection.

You can now configure re-notifications so you’re notified again when:

  • An anomaly remains active after a specified period of time (similar to a reminder or snooze), or
  • The anomaly impact increases, either by a fixed dollar amount, a percentage, or a combination of amount and percentage.


Previously, notifications were triggered only when a threshold was first crossed. Now, notifications can continue if cost risk persists or escalates, helping teams stay aware of ongoing or worsening spend issues.

Re-notifications can be configured within your existing anomaly notification settings and work with the current Slack, Teams, and email delivery options. To learn more, consult our Help docs or raise a support ticket to speak with a DoiT expert.

Josh Bonner
Announcement
a week ago

Reuse shared data across CloudFlow automations with Datastore tables

Cloud automations rarely live in isolation. One flow detects an issue, another enriches context, a third notifies the right team. Without a shared source of truth, you end up duplicating logic or hardcoding mappings that drift over time.

The Datastore node provides your CloudFlow automations with a shared, managed place to store and query structured data, enabling multiple flows to reference the same tables for enrichment, routing, and state. Think of Datastore as a lightweight database you can use directly from flows.

A common pattern is a tag ownership table:

  • You maintain a table that maps tags (or tag patterns) to an owner, team, cost center, escalation channel, and metadata.
  • Any flow can query it to resolve “who owns this?” consistently.

Example: an anomaly flow spots a sudden spike on an untagged workload. It looks up the resource’s tags, queries the ownership table, and then routes the alert to the correct Slack channel. A separate remediation flow uses the same table to decide who can approve a change and where to open a ticket.


In the Datastore node, you can:

  • Get records using filters (including values from previous nodes)
  • Insert records for new rows, including batch inserts
  • Upsert records to keep a table current without duplicates by using a unique key column

Supported column types include Text, Integer, Numeric, Boolean, Date, Timestamp, and JSON.

How to get started

Create a Datastore table for shared mappings (for example, tag-to-owner), then reference it from any flow via a Datastore node query. Use Upsert when you want scheduled flows to keep the table continuously in sync from an authoritative source. 

Vadim Solovey
Announcement
a week ago

Pause CloudFlow runs with a Sleep node for safer automation

Some workflows need a deliberate delay to avoid API throttling or to wait for changes to propagate. For example, if you’re looping through cloud resources or applying IAM updates, a short pause can prevent rate-limit errors and reduce noisy retries.

A common operational pattern is also dev environment scheduling: start in the morning, run all day, then shut down in the evening. Sleep lets you build this into a single flow without external schedulers.


What’s new

You can add a Sleep node to pause a CloudFlow run for a configurable duration, then automatically resume execution. This helps you:

  • Space out provider API calls to reduce throttling risk
  • Add cooldown periods before re-checking or notifying
  • Wait for eventual consistency (for example, new resources or IAM changes becoming effective)
  • Orchestrate time-based actions in one flow, like dev environment start and end-of-day shutdown

How it works

When a run reaches the Sleep node, CloudFlow marks the run as Sleeping and pauses execution until the configured time elapses. After the duration completes, CloudFlow wakes the run and continues to the next node.

How to get started: https://help.doit.com/docs/operate/cloudflow/nodes/sleep

Vadim Solovey
a week ago

Navigate Cloud Intelligence™ faster than ever with reimagined navigation

Power users of DoiT Cloud Intelligence™ work across many features, from FinOps Reports to automation tools like CloudFlow. We heard loud and clear that hunting for the right place was friction nobody needed. The new navigation puts you in control of your own experience.

Favorites are a simple way to star the features and reports you use most so they're always one click away. Whether you're jumping into a weekly spend report, checking Insights, or reviewing cost anomalies, you no longer have to dig through menus to get there. Star once and find it instantly every time.


Hover over any item in the new mega menu or search experience and click the star icon to add an item to your Favorites. You can favorite:

  • Top-level features (e.g., CloudFlow, Anomalies, Budgets, etc.)
  • Individual reports
  • Specific dashboards

Your favorite items will appear in a persistent Favorites bar just below the main navigation, giving you instant access no matter where you are in the product. Alongside Favorites, the console now also tracks your recently visited pages, so even items you haven't starred are easy to get back to.

Brad Rutland
Announcement
a week ago

Keep CloudFlow maintainable by calling shared logic with Sub Flow

As CloudFlows grow, the same steps often get copied into multiple workflows. That makes fixes slower and outcomes inconsistent, because you end up updating the same logic in several places.

Use-case example: you run a daily workflow that detects cost anomalies, then triggers CloudOps remediation. Instead of duplicating “ownership resolution” and “notification formatting” steps across every anomaly flow, rightsizing flow, and incident flow, you put that logic in a single subflow and call it from each parent workflow, so updates propagate everywhere.

What’s new

  • You can now use the Sub Flow node to run one CloudFlow from inside another, so shared logic lives in a single reusable flow.
  • Parent flows can pass inputs as parameters (mapped to the sub flow’s local variables), so the same sub flow works across multiple use cases.
  • Each invocation creates a child run you can open from the run history, making troubleshooting and auditing easier.

You build a reusable flow (the sub flow) and publish it. In your main workflow, add a Sub Flow node, select the flow you want to call, and provide the parameter values for that run. When the parent reaches that node, CloudFlow runs the selected flow as a child run and returns its output back to the parent so the workflow can continue.


Getting started

  • Create the reusable flow you want to call and define the local variables it should accept.
  • In the parent flow, add a Sub Flow node and select the flow to call from the Flow drop-down.
  • Map parameters, then use the Sub Flow node output in downstream nodes.

Read more about subflows in CloudFlow Help Center.

Vadim Solovey