Hidden Lessons of Incident Management
By Jason Hand

One of the most common early goals of implementing DevOps principles is a deep understanding of our systems in a stable state. However, this objective is not a “once and done” effort. It is important to circle back in some form (a feedback loop) as changes are introduced. It’s an ongoing exercise for the entire organization as processes, tools, and teams improve over time.

In many cases during the beginning stages of a DevOps transformation, much of our time is spent simply agreeing on a starting point. An unfortunate consequence is that without confidence about where to start, we often never start at all. Analysis paralysis is a very real thing, especially for big organizational changes, and those who are risk-averse fall victim to it far too easily.

Be wary if R.O.I. creeps into the conversation about adopting DevOps tools and processes. It is an immediate indication that the collective mindset of management, or at least of the decision-makers, has not yet made continuous improvement and learning the highest priority. The decision-making habits of the past will not empower an organization to adapt and thrive in today’s competitive business environment.

(Should you disagree with this position, the rest of this article will provide you no real value.)

“Establishing a deep understanding of our current systems to formulate a baseline and feedback loop is the foundation. From there, we improve.”

Measuring confidence (or the lack thereof) in current methods of software delivery and maintenance, as reflected by anomalies in development and operations work, helps shed light on where to start. By shifting focus toward a deeper understanding of our infrastructure and codebase, a starting point begins to appear. Decision-making paralysis starts to ease, and “managing from a distance” behaviors, such as invocations of R.O.I., stop carrying any meaningful weight.
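
To make “formulating a baseline” concrete, here is a minimal sketch of one way to flag anomalies against a rolling baseline. The metric, window size, and three-sigma threshold are illustrative assumptions, not anything prescribed here:

```python
# Minimal sketch: flag metric values that fall far outside a rolling baseline.
# The window size and three-sigma threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=60, sigmas=3.0):
    """Return a check(value) function that learns a rolling baseline."""
    history = deque(maxlen=window)

    def check(value):
        is_anomaly = False
        if len(history) >= 2:
            baseline, spread = mean(history), stdev(history)
            # Flag points more than `sigmas` standard deviations from the mean.
            is_anomaly = spread > 0 and abs(value - baseline) > sigmas * spread
        history.append(value)
        return is_anomaly

    return check

check_latency = make_anomaly_detector()
for latency_ms in [120, 125, 118, 122, 119, 121, 480]:
    if check_latency(latency_ms):
        print(f"anomaly: {latency_ms} ms is far outside the baseline")
```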

From there, small incremental goals, known as “Target Conditions,” can be set to begin the process of improvement. This focus on improvement is the key to unlocking many of the concepts brought up in DevOps conversations. Continuous Integration and Continuous Delivery are possible only as the result of understanding current conditions and of a company-wide effort toward Continuous Improvement.

Thus, a good starting point for any organization dipping a first toe in the DevOps pool is improvements to on-call, incident management, and monitoring. Understanding your organization’s existing methods of identifying and responding to abnormalities is one of the easiest and most instructive first steps.

The immediate benefits of modern on-call practices are easy to identify and agree on:

– Anomalies are detected in real time.

– The correct operators and engineers are alerted to actionable issues as quickly as possible.

– Critical context about what’s taking place gives responders exactly what they need in the moment, reducing both response time and cognitive load.

– A collaborative space to discuss context, diagnosis, and repair efforts means a reduced Time to Repair and increased situational awareness, across teams and the whole organization, of what is happening and the current “state of systems” (a rough sketch of this flow follows the list).
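
As one illustration of the benefits above, here is a minimal, hypothetical sketch of routing an actionable alert, with context attached, to the current on-call responder. The schedule, alert fields, and notify() transport are stand-ins for whatever platform you actually use:

```python
# Minimal sketch: route an actionable alert, with context, to the current
# on-call responder. The schedule, fields, and notify() are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    service: str
    summary: str
    severity: str
    runbook_url: str  # the context a responder needs in the moment

ON_CALL = {"payments": "alice", "checkout": "bob"}  # assumed schedule

def notify(responder, alert):
    # Stand-in for a pager/chat integration; a real system would also post
    # to a shared timeline so the whole team can collaborate on diagnosis.
    print(f"paging {responder}: [{alert.severity}] {alert.service} - "
          f"{alert.summary} (runbook: {alert.runbook_url})")

def route(alert):
    # Deliver the alert to whoever is on call for the affected service.
    notify(ON_CALL.get(alert.service, "default-oncall"), alert)

route(Alert("payments", "error rate above baseline", "critical",
            "https://wiki.example.com/runbooks/payments"))
```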

However, what about the benefits that aren’t obvious or immediate? What else is gained simply by improving how we monitor our systems and handle on-call and incident management?

Opportunity To Learn
When a problem isn’t identified, or nobody is notified of it, until much later, learning from it becomes difficult. Understanding the contributing factors grows increasingly problematic as time passes. The trail to identifying everything involved in a service disruption goes cold as operators, engineers, and the systems themselves move on to new tasks, and the opportunity for improvement is missed.

Snowball effect
What may seem like a small or non-critical problem can quickly become a large one if left alone. As time ticks away, seemingly insignificant issues accumulate and grow into large, complicated, or complex problems that have dangerous negative impacts and are much more difficult to diagnose and repair. In some situations this happens very quickly, and a minor incident becomes a “Sev-1” outage in no time at all.
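
One common safeguard against the snowball is a timed escalation policy: if nobody acknowledges an alert, it climbs to the next tier before the problem grows. Here is a minimal sketch, with the tiers and timeouts as illustrative assumptions:

```python
# Minimal sketch: a timed escalation policy so an unacknowledged alert
# climbs the chain before it snowballs. Tiers and timeouts are assumptions.
ESCALATION_TIERS = [
    ("primary on-call", 5 * 60),    # page, then wait five minutes for an ack
    ("secondary on-call", 5 * 60),  # then try the backup responder
    ("engineering manager", None),  # final tier; nowhere left to escalate
]

def escalate(alert_id, wait_for_ack):
    """Page each tier in turn until someone acknowledges the alert."""
    for target, timeout in ESCALATION_TIERS:
        print(f"{alert_id}: paging {target}")
        if timeout is None or wait_for_ack(alert_id, timeout):
            return
        print(f"{alert_id}: no ack after {timeout // 60} min, escalating")

# Stand-in ack check; a real system would block on an acknowledgement event.
escalate("alert-42", lambda alert_id, timeout: False)
```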

Stay on Track
Many of us follow Agile development principles and operate in short development cycles. Sprints are planned tightly enough that disruptions and context switching can be very detrimental to our efficiency. At the same time, sprint planning establishes targets and goals with the explicit caveat that we can change course quickly if the need arises. By responding to disruptions right away, we give ourselves the greatest chance of achieving those goals.

Waiting to deal with a problem until you’ve finished the code or configuration you are currently working on may very well result in the realization that those efforts (and code) were wasted. The feedback about your system (in the form of a problem) is likely full of information indicating that the piece of code you are writing won’t work under the current conditions of your system. Or worse, that it doesn’t provide value to the service you are building.

Leveraging monitoring, alerting, and incident management means having a pulse on your systems. That feedback loop is essential to staying on track for the greater good of the services you are engineering, even if it means changing course and activities quickly and often. That is, after all, what Agile and DevOps are meant to provide.

Consistency
The quality of your service is extremely important not only to your end users but to the business as a whole. The service you provide IS the brand of the company, and failing to treat quality of service as a top priority can have extreme negative consequences. System resiliency and reliability, as the means toward “high availability,” are paramount to establishing credibility. Consumers of your product have very little tolerance for regular or lengthy outages. Telling your end users that quality of service matters deeply to you, while not responding to problems as they occur, is saying one thing and doing another.

The message you are sending is inconsistent at best, and it signals trouble within the organization (likely at the management level): priorities are not in alignment. Consistency is one of the most important things for any organization to focus on. Your customers are paying attention to it. Are you?

Downstream consequences
Many of us are aware of the benefits of loosely coupled, independent processes and systems. The arguments for a microservices architecture are hard to ignore: degraded performance in one service can have little to no impact on others, and a problem in one small area of the system need not have negative consequences for the system as a whole.

However, unless your entire service is part of a distributed microservice ecosystem, services are in fact tightly coupled, and a problem in one area can quickly lead to problems elsewhere. The idea of a rarely used, non-value-adding part of your infrastructure or codebase bringing down your entire service is merely frustrating to some, but it keeps many in Operations roles from sleeping well at night. Not being aware of, or alerted to, an issue can mean catastrophic failure when a small, less significant service takes out a large, critical one.
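
A common defensive pattern here is a circuit breaker: after repeated failures, stop calling the flaky, non-critical dependency and serve a fallback so it cannot drag down the critical path. Here is a minimal sketch, with the thresholds and fallback value as illustrative assumptions:

```python
# Minimal circuit-breaker sketch: stop calling a failing, non-critical
# dependency so its failures cannot cascade into the critical path.
# Thresholds and the fallback value are illustrative assumptions.
import time
from typing import Optional

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: Optional[float] = None

    def call(self, func, fallback):
        # While open, skip the dependency entirely until the reset window
        # passes, so its outages do not become our outages.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback
            self.opened_at, self.failures = None, 0  # half-open: try again
        try:
            result = func()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback

def fetch_recommendations():
    raise TimeoutError("non-critical service is down")  # simulated outage

breaker = CircuitBreaker()
# The critical page still renders; the flaky dependency is bypassed.
print(breaker.call(fetch_recommendations, fallback=[]))
```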

The approach you and your organization take to managing incidents, and to supporting those tasked with responding to them, is a key indicator of how much you value continuous improvement. If the culture of your team or company does not place a high value on continuously learning and striving for improvements in processes, tools, and individuals, then any effort to roll out DevOps will fail. This is why the culture piece of DevOps conversations comes up so frequently, and why it frustrates many who hold tightly to “old-view” methods of managing.

Continuous improvement is at the heart of it all. Empathizing with our end users, and with those who engineer and maintain our systems, means that nothing is ever “done” or “good enough.” Everything must continuously get better. Establishing a deep understanding of your systems provides insight into where to focus improvement efforts.

Failing to make understanding and learning the highest priority means imminent failure for the organization and the products or services it provides.
