DORA Research: 2018

Survey Questions

Responses to the following questions were used in the analysis published in the 2018 Accelerate State of DevOps Report.

Availability

For the primary application or service you work on:

  • I know what its availability actually was in the most recent period.
  • It has well-defined targets for availability (such as Service Level Agreements / Service Level Objectives) that are clearly communicated among the team and to customers.
  • My team met or exceeded our target for availability in the most recent period.
  • When we miss our availability targets, we perform improvement work and/or re-prioritize.

Burnout

  • I am indifferent or cynical about my work.
  • I feel burned out from my work.
  • I feel exhausted.
  • I feel like I am ineffective in my work.
  • My feelings about work negatively affect my life outside of work.

Climate For Learning

  • Learning is the key to improvement.
  • Learning is viewed as an investment, not an expense.
  • Once we quit learning we endanger our future.

Cloud

  • I can dynamically increase or decrease the cloud resources available for the service or product that I primarily support based on demand.
  • I can monitor or control the quantity and/or cost of cloud resources used by the service or product that I primarily support.
  • Once I have access, I can independently provision and configure the cloud resources and capabilities required for my product or service on demand without raising tickets or requiring human interaction.
  • The cloud my product or service runs on serves multiple teams and applications, with compute and infrastructure resources dynamically assigned and re-assigned based on demand.
  • The service or product that I primarily work on is designed to be accessed from a broad range of devices (e.g. smartphones, tablets, laptops) over the network without the need for proprietary plug-ins or protocols.

Continuous Delivery

  • Fast feedback on the quality and deployability of the system is available to anyone on the team.
  • My team prioritizes keeping the software deployable over working on new features.
  • Our software is in a deployable state throughout its lifecycle.
  • We can deploy our system to production, or to end users, at any time, on demand.
  • When people get feedback that the system is not deployable (such as failing builds or tests), they make fixing these issues their highest priority.

Continuous Integration

  • Automated builds and tests are executed successfully every day.
  • Code commits result in a series of automated tests being run.
  • Code commits result in an automated build of the software.
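
The items above describe a setup in which every commit automatically triggers a build and a test run. As an illustrative sketch only (not part of the survey), the Python script below shows that idea in its simplest form; the `./build.sh` script and the pytest suite are assumed placeholders, and in practice a CI server would run these steps rather than a local script.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the CI practice described above: build and test on every commit.

`./build.sh` and the `tests/` pytest suite are assumed placeholders; a real team
would run equivalent steps on a CI server triggered by each code commit.
"""
import subprocess
import sys


def run_step(name, command):
    """Run one pipeline step and stop the pipeline if it fails."""
    print(f"--- {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{name} failed; this commit is not in a releasable state.")
        sys.exit(result.returncode)


if __name__ == "__main__":
    run_step("Automated build", ["./build.sh"])        # assumed build script
    run_step("Automated tests", ["pytest", "tests/"])  # assumed test suite
    print("Automated build and tests passed for this commit.")
```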

Continuous Testing

  • Automated tests are seamlessly integrated into our software delivery toolchain.
  • Developers practice test-driven development by writing unit tests before writing production code for all changes to the codebase.
  • Developers primarily create and maintain acceptance tests.
  • Developers use their own development environment to reproduce acceptance failures.
  • I can get feedback from automated tests in less than ten minutes both on my local workstation and from the CI server.
  • It is easy for developers to fix acceptance test failures.
  • Manual test activities such as exploratory testing, usability testing, and acceptance testing are performed continuously throughout the delivery process.
  • Test failures are likely to indicate a real defect.
  • Testers work alongside developers throughout the software development and delivery process.
  • We have the test data necessary to run our tests easily at every step.
  • When the automated tests pass, I am confident the software is releasable.
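
Several of the items above refer to test-driven development, where a failing test is written before the production code that satisfies it. The fragment below is a hypothetical illustration of that practice using Python's standard `unittest` module; the `pricing.parse_price` function is an assumed example, not something defined by the survey.

```python
import unittest

# Test-driven development: this test is written first, so `parse_price` (a
# hypothetical example function in an assumed `pricing` module) does not exist
# yet and the test fails until the production code is written to satisfy it.
from pricing import parse_price


class ParsePriceTest(unittest.TestCase):
    def test_parses_dollar_amount_to_cents(self):
        self.assertEqual(parse_price("$12.50"), 1250)

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_price("twelve dollars")


if __name__ == "__main__":
    unittest.main()
```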

Customer Value

  • Customer feedback is used to inform the design of products and features.
  • Customer feedback on product and feature quality is important to my company.
  • Customer insights on quality of products and features are actively sought.
  • My organization collects customer satisfaction metrics regularly.

Database Change Management

  • All database changes are stored as scripts in version control.
  • Database changes DO NOT slow us down or cause problems when we do code deployments.
  • Everyone in our engineering org has visibility into the progress of pending database changes.
  • Production database changes are managed in the same way as production application changes.
  • When changes to the application require database changes, we always discuss them with the people responsible for the production database.
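
The practice these items describe, storing every database change as a script in version control and promoting it like application code, is usually implemented with a migration tool such as Flyway or Liquibase. The sketch below shows the underlying idea only, assuming a version-controlled `migrations/` directory of numbered SQL files and a SQLite database; it is an illustration, not a recommended implementation.

```python
"""Illustrative sketch of script-based database change management.

Assumes a version-controlled `migrations/` directory containing numbered files
such as 001_create_users.sql and 002_add_email_index.sql. Applied scripts are
recorded so each one runs exactly once, mirroring how application changes are
promoted through version control.
"""
import pathlib
import sqlite3

MIGRATIONS_DIR = pathlib.Path("migrations")  # assumed directory of .sql scripts


def apply_pending_migrations(db_path="app.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name in applied:
            continue  # already applied; each script runs exactly once
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (script.name,))
        conn.commit()
        print(f"applied {script.name}")
    conn.close()


if __name__ == "__main__":
    apply_pending_migrations()
```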

Deployment Pain

  • Code deployments are not at all disruptive.
  • Code deployments are not feared.
  • Code deployments are relatively easy and pain-free.

Generative Culture

  • Cross-functional collaboration is encouraged and rewarded.
  • Failures are treated primarily as opportunities to improve the system.
  • Information is actively sought.
  • Messengers are not punished when they deliver news of failures or other bad news.
  • New ideas are welcomed.
  • Responsibilities are shared.

Loosely Coupled Architecture

  • My team can deploy and release our product or service on demand, independently of other services it depends upon.
  • On my team, we can make large-scale changes to the design of our system without creating significant work for other teams.
  • On my team, we can make large-scale changes to the design of our system without depending on other teams to make changes in their systems.
  • To complete my own work, I don’t need to communicate and coordinate with people outside my team.
  • We can do most of our testing on demand, without requiring an integrated test environment.

Manual Work

  • What percentage of your change approval processes are done manually?
  • What percentage of your configuration management is done manually?
  • What percentage of your deployment is done manually?
  • What percentage of your testing suite is done manually?

Monitoring & Observability

  • My team has a tech solution in place for monitoring key business and systems metrics.
  • My team has a tech solution in place to report on system state as experienced by customers (e.g., "do my customers know if my system is down and have a bad experience?").
  • My team has a tech solution in place to report on the overall health of systems (e.g., Are my systems functioning? Do my systems have sufficient resources available?).
  • My team has access to tools and data which help us trace, understand, and diagnose infrastructure problems in our production environment, including interactions between services.
  • My team has tooling in place that can help us with understanding and debugging our systems in production.
  • My team has tooling in place that provides the ability to find information about things we did not previously know (e.g., we can identify "unknown unknowns").

Open Source Software

  • My organization has formal initiatives in place to expand our use of open source software.
  • My team makes extensive use of open source components, libraries, and platforms.
  • My team plans to expand our use of open source software.

Organizational Performance

For each of the following performance indicators, how well did your organization meet its goals over the past year?

Response options: Performed well above goals; Performed above goals; Performed slightly above goals; Met goals; Performed slightly below goals; Performed below goals; Performed well below goals; N/A or I don’t know.

  • Achieving our organizational and mission goals
  • Customer satisfaction
  • Increased number of customers
  • Operating efficiency
  • Other measures that demonstrate to external parties that your organization achieves intended results
  • Quality of products or services provided
  • Quantity of products or services
  • Relative market share for primary products
  • Your organization's overall performance
  • Your organization's overall profitability

Outsourcing

  • My organization outsources all or a significant portion of our application development.
  • My organization outsources all or a significant portion of our IT Operations work.
  • My organization outsources all or a significant portion of our testing and QA.

Platform As A Service

  • Most of my work takes place on a platform in the cloud (PaaS).
  • My team can deploy our application into the cloud on demand using a single step.
  • My team can perform self-service changes on demand for databases and other services required by our application which is using the PaaS.
  • My team uses libraries and infrastructure defined by the PaaS as the basis for our applications.

Retrospectives

  • My organization regularly takes the lessons learned from our post-mortems and implements changes to tooling, processes, or procedures to improve how we work.
  • My team conducts post-mortems to learn from our mistakes and failures and improve how we work.
  • My team consistently holds post-mortems (also known as learning reviews or retrospectives) following major outages.

Security

  • Information Security has input in the design of the applications that I work on.
  • Information Security has made easy-to-consume, pre-approved libraries, packages, toolchains, and processes available for developers and IT operations.
  • Our information security team works with us throughout the development process.
  • Security review is conducted for all major features on the applications I work on.
  • Security reviews are performed throughout the development process.
  • Tests to help us discover security problems are run throughout the software development process.
  • The security review process does not slow down the development process for the applications that I work on.

Software Delivery Performance

For the primary application or service you work on...

  • How long does it generally take to restore service when a service incident or a defect that impacts users occurs (e.g., unplanned outage, service impairment)?
    Response options: More than six months; Between one month and six months; Between one week and one month; Between one day and one week; Less than one day; Less than one hour; I don't know or not applicable.
  • How often does your organization deploy code to production or release it to end users?
    Response options: Fewer than once per six months; Between once per month and once every six months; Between once per week and once per month; Between once per day and once per week; Between once per hour and once per day; On demand (multiple deploys per day); I don't know or not applicable.
  • What is your lead time for changes (i.e., how long does it take to go from code committed to code successfully running in production)?
    Response options: More than six months; Between one month and six months; Between one week and one month; Between one day and one week; Less than one day; Less than one hour; I don't know or not applicable.
  • What percentage of changes to production or releases to users result in degraded service (e.g., lead to service impairment or service outage) and subsequently require remediation (e.g., require a hotfix, rollback, fix forward, patch)?
    Response options: 0%-15%; 16%-30%; 31%-45%; 46%-60%; 61%-75%; 76%-100%; I don't know / N/A.
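
These four questions correspond to the four software delivery performance measures used in the report: lead time for changes, deployment frequency, time to restore service, and change failure rate. As a hypothetical illustration only, the sketch below computes those measures from a team's own deployment and incident records; the record format is an assumption made for the example, and the survey simply asks respondents to pick the matching range.

```python
"""Hypothetical computation of the four delivery measures from example records.

The `deployments` and `incidents` data are made-up illustrations; the survey
asks respondents to bucket these values into the ranges listed above.
"""
from datetime import datetime
from statistics import median

# Each deployment records when the code was committed, when it reached
# production, and whether it degraded service and needed remediation.
deployments = [
    {"committed": datetime(2018, 5, 31, 15), "deployed": datetime(2018, 6, 1, 10), "failed": False},
    {"committed": datetime(2018, 6, 2, 9),   "deployed": datetime(2018, 6, 2, 14), "failed": True},
    {"committed": datetime(2018, 6, 3, 16),  "deployed": datetime(2018, 6, 4, 11), "failed": False},
]
# Each incident records when service was impaired and when it was restored.
incidents = [
    {"start": datetime(2018, 6, 2, 14, 30), "restored": datetime(2018, 6, 2, 16, 0)},
]

period_days = 30
deployment_frequency = len(deployments) / period_days                     # deploys per day
lead_time = median(d["deployed"] - d["committed"] for d in deployments)   # commit to production
time_to_restore = median(i["restored"] - i["start"] for i in incidents)   # incident duration
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} deploys/day")
print(f"Median lead time for changes: {lead_time}")
print(f"Median time to restore service: {time_to_restore}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```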

Team Experimentation

  • We can make changes to specifications or stories without the permission of people outside the team.
  • We can work on new ideas without the permission of people outside the team.
  • We discuss specifications or stories and make changes to them as part of the development process.
  • We write specifications or stories as part of the development process.

Trunk Based Development

  • All developers on my team push code to trunk/master at least daily.
  • Branches and forks have very short lifetimes (less than a day) before being merged to master.
  • Our application team never has code lock periods (when no one can check in code or do pull requests) due to merge conflicts.
  • There are fewer than three active branches on the application's code repo.
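
The last two items above can be checked directly against a repository. The sketch below is a hypothetical self-check, assuming a local Git checkout and the `git` CLI; it approximates "active" as any local branch other than the trunk and reports the age of each branch's last commit.

```python
"""Hypothetical self-check for the trunk-based development items above.

Assumes it runs inside a local Git checkout with the `git` CLI on PATH.
"Active" is approximated as any local branch other than the default branch.
"""
import subprocess
from datetime import datetime, timezone


def local_branches_with_last_commit():
    """Return (branch_name, last_commit_time) for every local branch."""
    out = subprocess.run(
        ["git", "for-each-ref", "--format=%(refname:short) %(committerdate:unix)", "refs/heads/"],
        capture_output=True, text=True, check=True,
    ).stdout
    branches = []
    for line in out.splitlines():
        name, timestamp = line.rsplit(" ", 1)
        branches.append((name, datetime.fromtimestamp(int(timestamp), tz=timezone.utc)))
    return branches


if __name__ == "__main__":
    now = datetime.now(tz=timezone.utc)
    active = [b for b in local_branches_with_last_commit() if b[0] not in ("master", "main", "trunk")]
    print(f"Active branches besides trunk: {len(active)} (survey item: fewer than three)")
    for name, last_commit in active:
        print(f"  {name}: last commit {(now - last_commit).days} day(s) ago")
```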

Version Control

  • Our application code is in a version control system.
  • Our application configurations are in a version control system.
  • Our scripts for automating build and configuration are in a version control system.
  • Our system configurations are in a version control system.
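
As a small hypothetical illustration of these items, a team could confirm that its code, configurations, and automation scripts are actually tracked; the paths below are assumed examples, not a layout the survey prescribes.

```python
import subprocess

# Assumed example paths; the survey does not prescribe any particular layout.
paths_to_check = ["src/app.py", "config/app.yaml", "scripts/build.sh", "infra/system.cfg"]

for path in paths_to_check:
    # `git ls-files --error-unmatch <path>` exits non-zero when the path is untracked.
    tracked = subprocess.run(
        ["git", "ls-files", "--error-unmatch", path],
        capture_output=True,
    ).returncode == 0
    print(f"{path}: {'tracked in version control' if tracked else 'NOT tracked'}")
```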

Work In Process (WIP) Limits

  • As a team, we are good at limiting our WIP.
  • Our WIP limits lead to process improvement.
  • Our WIP limits make obstacles to higher flow visible.
  • We strive to limit our WIP, and have processes in place to do so.
  • WIP limits are used as a way to improve our throughput.

Working In Small Batches

  • Our features are decomposed in a way that allows a developer to complete the work in a week or less.
  • Our features are sliced in a way that lends itself to frequent production releases.
  • Our work is decomposed into features that allow for minimum viable products (MVPs) and rapid development, rather than complex and lengthy processes (an MVP has just enough features to get validated learning about the product & its continued development).