
Perform 2023 Guide: Organizations mine efficiencies with automation, causal AI



At Perform 2023, the theme is “IT automation.” The conference and this guide explore how to push the limits of modern cloud observability. With causal AI, observability, security, and business data converge, enabling workflow automation and exploratory analytics without boundaries.


Digital transformation shows no signs of slowing down. As a result, the complexity of modern multicloud ecosystems continues to increase. Data proliferation—as well as a growing need for data analysis—has accelerated.


Increasingly, organizations are turning to modern observability platforms to address the complexity of, and gain visibility into, cloud environments. They now use modern observability to monitor expanding cloud environments so they can operate more efficiently, innovate faster and more securely, and deliver consistently better business results.


Further, automation has become a core strategy as organizations migrate to and operate in the cloud. More than 70% of respondents to a recent McKinsey survey now consider IT automation strategically important to their digital transformations.


According to recent Dynatrace data, 59% of CIOs say the increasing complexity of their technology stack could soon overload their teams without a more automated approach to IT operations.


At the same time, according to a McKinsey report, while respondents believe at least one-quarter of their organizations’ tasks could be automated over the next five years, fewer than 20% have scaled automation technologies across multiple parts of the business.


Moving to greater automation is a key goal for 2023, as organizations face economic uncertainty and need to stay innovative while remaining cost-efficient.


In what follows, we explore some key cloud observability trends in 2023, such as workflow automation and exploratory analytics. We also cover the convergence of observability, security, and business data; the role of causal AI in observability; data visualization; and more. These are just some of the topics being showcased at Perform 2023 in Las Vegas.


Perform 2023 news

At Perform 2023 in Las Vegas, the headliner theme is IT automation. Dynatrace has announced a series of platform enhancements that enable teams to automate processes, gain insight from their data, and share and act on that insight more easily. We’ll continue to post news here as it happens!


The platform has been updated with a variety of capabilities to enable teams to discover data insights more easily. See the resources below for more information.



At Perform 2023, Dynatrace CEO Rick McConnell explains why modern observability is no longer optional—it’s mandatory.


Dynatrace CMO Mike Maciag shared the stage with CPO Steve Tack in a Perform 2023 keynote about observability and digital transformation.


The three pillars of observability converge on the Grail data lakehouse to unleash a new era of collaborative analytics and automation.


Dynatrace announced today that it is extending its platform’s Grail™ data lakehouse beyond logs and business events to deliver new support for metrics, distributed traces, and multicloud topology and dependencies.


This new Dynatrace® platform technology features an intuitive interface and a no-code and low-code toolset, and it leverages Davis® causal AI to empower teams to extend answer-driven automation across boundless BizDevSecOps workflows.


This new Dynatrace® platform technology empowers customers and partners with an easy-to-use, low-code approach to create custom, compliant, and intelligent data-driven apps for their IT, development, security, and business teams.


The “DevSecOps Lifecycle Coverage with Snyk” app, developed with the new Dynatrace AppEngine, will enable teams to mitigate security risks across pre-production and production environments, including runtime vulnerability detection, blocking, and remediation.


The “Carbon Impact” app demonstrates the boundless capabilities of the Dynatrace AppEngine and provides customers with precise answers about their cloud ecosystems’ carbon footprint and ways to reduce it.


From data lakehouse to an analytics platform

Traditionally, to gain true business insight, organizations had to trade off access to high-quality, real-time data against factors such as data storage costs. IT pros need a data and analytics platform that doesn’t force sacrifices among speed, scale, and cost.


Therefore, many organizations turn to a data lakehouse, which combines the flexibility and cost-efficiency of a data lake with the contextual and high-speed querying capabilities of a data warehouse. A data lakehouse eliminates team silos and delivers faster, high-quality insights.


But today’s organizations must go a step further to maximize the value of logs and observability data. Grail, the Dynatrace causational data lakehouse with a massively parallel processing analytics engine, unites observability, security, and business data from multicloud and cloud-native environments while retaining the data’s context to deliver precise answers in real time. Further, with Grail, Dynatrace automatically stores all an organization’s data with causational context.
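To make the idea of context-preserving, unified data concrete, here is a minimal Python sketch. It is not the Grail API or its query language; it only illustrates how logs, traces, and business events that share a common context (here, a hypothetical trace_id) can be analyzed together without losing their relationships.

```python
# Illustrative sketch only: it mimics the idea of analyzing logs, traces,
# and business events that share a common context (a trace_id), not the
# actual Grail API or query language. All records below are made up.
from collections import defaultdict

logs = [
    {"trace_id": "t1", "level": "ERROR", "message": "payment timeout"},
    {"trace_id": "t2", "level": "INFO", "message": "checkout complete"},
]
spans = [
    {"trace_id": "t1", "service": "payment-service", "duration_ms": 5200},
    {"trace_id": "t2", "service": "payment-service", "duration_ms": 180},
]
business_events = [
    {"trace_id": "t1", "order_value": 120.0, "outcome": "abandoned"},
    {"trace_id": "t2", "order_value": 85.0, "outcome": "purchased"},
]

# Index every signal by its shared context so nothing loses its relationships.
by_trace = defaultdict(dict)
for record in logs:
    by_trace[record["trace_id"]]["log"] = record
for record in spans:
    by_trace[record["trace_id"]]["span"] = record
for record in business_events:
    by_trace[record["trace_id"]]["business"] = record

# Exploratory question: how much revenue is at risk from slow, failing payments?
revenue_at_risk = sum(
    ctx["business"]["order_value"]
    for ctx in by_trace.values()
    if ctx["log"]["level"] == "ERROR" and ctx["span"]["duration_ms"] > 1000
)
print(f"Revenue at risk from failing payment traces: ${revenue_at_risk:.2f}")
```

In a real lakehouse this join-on-context happens at query time across far larger volumes, but the principle is the same: answers stay tied to the transactions and entities that produced them.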


For more information on how a data lakehouse powered by software intelligence can help your organization quell cloud complexity, create operational efficiencies, and deliver better business insights, view the resources below.



An analytics platform with exploratory analytics capabilities enables teams to make data-backed decisions that drive better business results.


Further extending the platform’s analytics capabilities, we’re expanding Grail by adding new data types and unlocking support for graph analytics. Learn more.


Discover everything you need to know about Grail, the Dynatrace causational data lakehouse with a massively parallel processing analytics engine.


While organizations often choose between data lakes and data warehouses, a data lakehouse offers the best of both. Discover the benefits of a data lakehouse, how it works, and more.


Discover how a data and analytics approach to software intelligence unifies observability and security data while generating real-time insights.


Log management and analytics are key to any company’s observability strategy. See how Dynatrace Log Management and Analytics enables any analysis at any time with Grail technology.


While logs create yet another silo for IT managers, they also provide a goldmine of data. However, turning those logs into meaningful insights requires a data lakehouse.


Organizations need a data architecture that can cost-efficiently store data and enable IT pros to access it in real time and with proper context. That’s where a data lakehouse can help.


As data volumes continue to explode, organizations need an effective way to store, contextualize, and query data to get immediate insights and drive automation.


The benefits of IT automation for development, security, and operations teams

For organizations to stay innovative while remaining cost-efficient, building automation into IT processes and workflows is key. With the complexity of today’s hybrid and multicloud ecosystems, manual security monitoring and observability efforts are ineffective and costly.


Not only are IT teams at risk of becoming overwhelmed, but vital data can also go unidentified, productivity can be lost, and issues can go unresolved. There are simply too many disparate sources of information to successfully manage and monitor without automation.


By automating workflows, teams throughout the organization can eliminate manual processes and improve outcomes. For example, development teams can use automation to increase efficiency in the software development lifecycle. Security teams can identify vulnerabilities and automate remediation. Operations teams can monitor user experience in cloud infrastructure and automatically provision resources to optimize digital customer experience. The ability for an organization to gain observability of all its data in context enables teams to identify areas where automated processes can enhance software security, increase cost-effectiveness, and improve customer experiences.
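As a rough illustration of what automating a workflow means in practice, the following Python sketch wires a detection step directly to a remediation step with no human in the loop. The metric source, SLO threshold, and provisioning call are hypothetical placeholders, not any particular vendor’s API.

```python
# Hypothetical sketch of an automated operations workflow: poll a metric,
# decide, and act without manual intervention. The metric source and the
# provisioning call are placeholders, not a real monitoring or cloud API.
import random
import time

LATENCY_SLO_MS = 500  # assumed service-level objective for this example

def fetch_p95_latency_ms(service: str) -> float:
    """Placeholder for a query against an observability backend."""
    return random.uniform(100, 900)

def provision_additional_instance(service: str) -> None:
    """Placeholder for an infrastructure-as-code or cloud API call."""
    print(f"[action] provisioning one more instance of {service}")

def remediation_loop(service: str, iterations: int = 3) -> None:
    for _ in range(iterations):
        latency = fetch_p95_latency_ms(service)
        if latency > LATENCY_SLO_MS:
            # The automated action replaces the manual "page an engineer" step.
            provision_additional_instance(service)
        else:
            print(f"[ok] {service} p95 latency {latency:.0f} ms is within the SLO")
        time.sleep(1)

remediation_loop("checkout-service")
```

The same pattern applies to the development and security examples above: a signal (a failed quality gate, a vulnerability finding) triggers a codified response instead of a ticket.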


To learn more about IT automation, check out the following resources.



Organizations can ingest, process, and analyze large volumes and varieties of data in one platform. Learn more.


Dynatrace AutomationEngine unlocks the power to combine observability, security, and business data with causal AI to easily automate BizDevSecOps workflows at enterprise scale.


Learn more about this new interactive capability that allows IT, development, security, and business users to collaborate using code, text, and rich media to build, evaluate, and share insights for exploratory analytics.


As cloud complexity and data silos rise, CIOs are struggling to keep up with the speed and scale of tech environments. Learn more.


Discover everything you need to know about Grail, the core platform technology that unifies data while retaining its context to deliver fast, scalable, and cost-effective AI-powered answers and automation.


What is IT automation? Learn how it works, its potential pros and cons, and why taking an observability platform approach is essential.


AIOps tools can help you streamline operations. But teams need automatic and intelligent observability to realize true AIOps value at scale.


Learn the best practices for developing an AIOps strategy that drives efficiency, innovation, and better business outcomes with this eBook.


Creating new efficiencies and capabilities with custom apps for all audiences

In every industry, organizations are looking for exploratory analysis of their IT and business data to help make strategic decisions. As a result, teams from IT to security and sales are adopting more cloud-based technologies and point solutions to provide answers to critical business questions. In fact, Gartner estimates that, by 2025, more than 95% of new digital workloads will be deployed on cloud-native platforms—up from 30% in 2021.


But as organizations adopt more technologies, getting timely, concise answers that benefit a wide range of stakeholders is getting harder, not easier. Indeed, according to a Dun & Bradstreet and Forrester Consulting report, 72% of organizations find managing multiple systems across regions and technology silos challenging. Further, the biggest challenge these organizations face is managing data and sharing insights that drive decisions across organizational silos.


What’s more, organizations are no longer concerned only with application performance and sales numbers. Increasingly, broader issues, such as sustainability and social justice, are driving business decisions. In another Gartner study, 39% of CEOs said taking an active social justice stance is good for business. Similarly, 45% of CEOs said climate change mitigation is having a significant effect on their businesses.


With these kinds of wider concerns, organizations need a way to centralize their data with automated observability so they can run customized analyses. These analyses take the form of custom apps that use contextualized data from varied sources to generate unique insights. Teams can tailor these custom apps with exploratory analysis, flexible dashboards, and reports that provide answers to the broader questions that affect strategic decisions.
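For example, a custom app in the spirit of the Carbon Impact use case might aggregate utilization data into an emissions estimate. The sketch below is purely illustrative; the hosts, energy figures, and carbon-intensity factors are made-up placeholders rather than vendor data.

```python
# Minimal sketch of the kind of analysis a custom app might run: estimate
# cloud carbon impact from host utilization data. All numbers are invented
# placeholders for illustration only.
hosts = [
    {"name": "web-01", "region": "us-east", "avg_kwh_per_day": 4.2},
    {"name": "web-02", "region": "eu-west", "avg_kwh_per_day": 3.8},
    {"name": "batch-01", "region": "us-east", "avg_kwh_per_day": 9.5},
]

# Hypothetical grid carbon intensity per region (kg CO2e per kWh).
CARBON_INTENSITY = {"us-east": 0.42, "eu-west": 0.23}

report: dict[str, float] = {}
for host in hosts:
    kg_per_day = host["avg_kwh_per_day"] * CARBON_INTENSITY[host["region"]]
    report[host["region"]] = report.get(host["region"], 0.0) + kg_per_day

# Surface the regions with the largest estimated footprint first.
for region, kg in sorted(report.items(), key=lambda item: -item[1]):
    print(f"{region}: ~{kg:.2f} kg CO2e per day")
```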


Learn more about what kinds of business questions custom apps and exploratory analytics can answer from the following resources.



See how AppEngine enables organizations to maximize their data insights by creating low-code custom apps for exploratory analytics.


Dynatrace® AppEngine enables customers to create custom, compliant, and intelligent data-driven apps. Learn more.


The new Carbon Impact app, developed using Dynatrace® AppEngine, tracks carbon emissions across hybrid and multicloud environments, delivering analytics and recommendations that support carbon-reduction initiatives.


Many organizations are undergoing a digital transformation. DevOps metrics and digital experience data are critical to this. Learn more.


Real-time vulnerability management

With increasingly complex environments and faster software delivery cycles, organizations are turning to automated workflows to reduce complexity and deliver high-performing software faster.


However, according to recent research, keeping applications secure in modern hybrid and multicloud systems is an ongoing challenge. In the 2023 Global CIO Report, 34% of CIOs say they must sacrifice code security given the pressure for faster innovation. Additionally, as IT environments grow more complex, with open source code, legacy components, and traditional monitoring tools that cannot scale, there are more opportunities for vulnerabilities to enter the software delivery lifecycle. Without adequate vulnerability scanning, those flaws can reach production undetected.


An organization’s ability to rapidly detect, identify, remediate, and prevent future vulnerabilities is crucial to maintaining high-performing applications and preventing large-scale security incidents like Log4Shell. Traditional, manual security approaches are no longer enough; DevOps and security teams require a more automated, intelligent approach to their DevSecOps practices. Organizations are turning to real-time vulnerability management solutions powered by AI to keep up with the complexity of modern multicloud environments.


By adopting a solution that automatically scans for vulnerabilities in both pre-production and runtime environments, teams become aware of security issues sooner, leaving more time to remediate them and to prevent future incidents.
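One simple way to picture this in a pre-production pipeline is a gate that reads a scanner’s findings and blocks the deployment when severities cross a policy threshold. The report format and threshold in this Python sketch are assumptions for illustration, not any specific scanner’s output.

```python
# Hypothetical pre-production security gate: block the pipeline when a
# vulnerability report contains findings at or above a policy threshold.
# The finding format and threshold are illustrative assumptions.
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
BLOCKING_THRESHOLD = "high"  # assumed policy: block on high or critical

def gate(findings: list[dict]) -> int:
    """Return a shell-style exit code: 0 to proceed, 1 to block the deploy."""
    blocking = [
        f for f in findings
        if SEVERITY_RANK[f["severity"]] >= SEVERITY_RANK[BLOCKING_THRESHOLD]
    ]
    for finding in blocking:
        print(f"[block] {finding['id']}: {finding['title']} ({finding['severity']})")
    return 1 if blocking else 0

# Example findings; the first mirrors Log4Shell, the second is invented.
example_findings = [
    {"id": "CVE-2021-44228", "title": "Log4Shell remote code execution", "severity": "critical"},
    {"id": "EXAMPLE-0002", "title": "Outdated TLS configuration in a legacy component", "severity": "medium"},
]

sys.exit(gate(example_findings))  # exits non-zero here, so the deploy is blocked
```

The same check can run against runtime findings, so a newly disclosed vulnerability in an already-deployed library triggers remediation rather than waiting for the next release cycle.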


To learn more about real-time vulnerability management, explore the following resources:


DevSecOps Lifecycle Coverage with Snyk enables teams to mitigate security risks across pre-production and production environments, including runtime vulnerability detection, blocking, and remediation.


We asked 1,300 CIOs and senior DevOps managers about the challenges they face. Here is what they reported.


We asked 1,300 CISOs about the state of application security and DevSecOps in their organizations. Here is what they reported.


Maintaining a secure software delivery pipeline requires mitigating risk. But managing the span of vulnerabilities in your environment can be challenging. Here is what to look for in a vulnerability management solution.


Runtime vulnerability management is key to ensuring secure IT operations. But without visibility into runtime threats, CISOs are having trouble managing risks.


