Best of 2018: 11 Popular Open Source DevOps Tools Worth Knowing

As we close out 2018, we wanted to highlight the five most popular articles of the year. Following is the fourth in our weeklong series of the Best of 2018.

Companies implementing DevOps best practices have demonstrated that they are more effective and flexible in designing and implementing IT tools and practices, resulting in higher revenue at lower cost. For traditional organizations looking to modernize, the adoption of DevOps tools provides consistency, quality and efficiency.

Open source DevOps tools are used to streamline development and deployment. A benefit of open source software is that it is built collaboratively, which can drive innovation and provide flexibility in adapting to changing markets and needs. Visibility into the code improves overall quality and security, and also helps companies avoid vendor lock-in with proprietary vendors.

Whether you are looking to accelerate an existing program or are just getting started with DevOps, below are 11 open source DevOps tools worth considering.


Behat is an open source, behavior-driven development (BDD) framework for PHP that automates testing of the expectations laid out by your business. It helps you deliver software that matters through test automation, deliberate discovery and continuous communication.
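For illustration, Behat scenarios are written in Gherkin, plain language that both the business and the test suite can read. A minimal, hypothetical feature file (names and steps invented for this example) might look like:

```gherkin
# features/checkout.feature -- a hypothetical scenario
Feature: Checkout
  In order to buy products
  As a customer
  I need to be able to complete a purchase

  Scenario: Paying for a single item
    Given I have a product in my cart
    When I proceed to checkout and pay
    Then I should see an order confirmation
```

Each Given/When/Then step is then bound to a PHP method in a context class, which is where the actual automation lives.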


Watir is a cross-platform, open source testing tool for web applications. It is among the most flexible and reliable Ruby libraries for automating web browsers. The tool interacts with a browser the way a human does: it validates text, fills out forms and clicks links.


Built on top of Kubernetes, Supergiant is an open source container management platform. It can deploy Kubernetes on multiple clouds in a matter of minutes, and the Supergiant API streamlines production deployment. Supergiant's packing algorithm can lower hardware costs by ensuring you run only the hardware you actually need.


Ansible automates common IT operations tasks such as application deployment, configuration management and cloud provisioning. It is owned by Red Hat and integrates with numerous other popular DevOps tools, including Jenkins, JIRA and Git. The free open source version of Ansible is available on GitHub. Red Hat offers three paid versions (premium, standard and self-support), with prices that differ depending on the required level of support and the number of nodes in production.
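As a concrete illustration, Ansible describes desired state in YAML playbooks. The minimal sketch below (the `webservers` host group and the choice of nginx are assumptions for this example) installs and starts a web server on a group of Debian/Ubuntu hosts:

```yaml
# playbook.yml -- a minimal, hypothetical playbook
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

You would run it with `ansible-playbook -i inventory playbook.yml`; Ansible connects over SSH and makes only the changes needed to reach the declared state.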


Infrastructure monitoring is an area with numerous solutions, from Zabbix to Nagios to various other open source tools. Although much newer tools have entered the market, Nagios remains a well-established monitoring solution, made highly effective by the large community of contributors creating plugins for it. Nagios can deliver results in a variety of visual reports and representations.


SaltStack is the paid enterprise version of Salt. Salt is highly flexible, powerful and intelligent open source software for event-driven orchestration, cloud control, configuration automation and remote execution. It helps DevOps organizations by orchestrating the efficient movement of code into production and keeping complex infrastructures tuned for optimal application delivery and business service. SaltStack orchestrates the DevOps value chain and helps deploy and configure dynamic applications.


Chef makes it possible to manage both traditional and cloud environments with a single tool. While maintaining high availability, Chef promises to accelerate cloud adoption. The Chef Development Kit provides the tools you need to develop and test your infrastructure automation code locally on your workstation before deploying changes into production. The Chef site offers extensive documentation and technical resources, including resources designed to help organizations transition to DevOps and scale their DevOps implementations.


You can expect portability with Docker, which is transforming IT environments. The portability comes from its containerization technology: applications are packaged in self-contained units that include everything an application requires to run (libraries, system tools, runtime, etc.). As a result, applications behave the same regardless of where they are deployed. Docker Engine is the component responsible for creating and running Docker containers, while Docker Hub is a cloud-based service for application sharing and workflow automation.
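For example, a minimal (hypothetical) Dockerfile packages a Node.js app together with its runtime and dependencies, so it runs identically anywhere Docker Engine runs; the filenames and port here are assumptions:

```dockerfile
# Base image bundles the Node.js runtime
FROM node:10-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --production
# Copy the application source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Build the image with `docker build -t myapp .` and run it with `docker run -p 3000:3000 myapp`.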


Git has become incredibly popular in recent years for managing source code, and GitHub in particular is famous as a host for open source projects. Because of the ease with which it handles branching and merging, Git stands out from other version control systems. Many DevOps teams use it to manage their applications' source code. It has great pull request and forking features, and plugins can link it with Jenkins to facilitate integration and deployment.


Hudson is a tool for managing and monitoring continuous integration and testing. Its key features include support for a variety of source code management systems, application servers, code analysis tools, testing frameworks and build tools, along with real-time notifications of test failures, change set support and an easy installation and configuration process. A huge library of plugins further extends its capabilities.


No matter where it runs, Puppet promises a standard way of operating and delivering software. Puppet automates deployment to boost auditability, reliability and agility. Puppet's products provide continuous automation and delivery across the complete software delivery life cycle. The latest version of Puppet features Node Manager and Puppet Apps, which help handle large numbers of dynamic, variable systems.


The world of DevOps is full of unique and outstanding open source tools. The popular DevOps tools above can help effectively bridge the gap between development and production environments. Choose the tool that suits your business needs and you can quickly see the difference in your business operations. And not only do these tools function well individually, they also play well together.

If you’re looking to build out your toolset, consider open source DevOps tools in addition to proprietary software.


Best of 2018: 5 Best Node.js Frameworks to Know

As we close out 2018, we wanted to highlight the five most popular articles of the year. Following is the third in our weeklong series of the Best of 2018.

When it comes to development platforms, the choices are wide and varied. It can be a difficult task choosing the right framework for your app, even for developers. For non-developers, the task can seem insurmountable. That’s why it’s important for app dev companies to educate their clients on the best development platform to fit the needs of their app. For most, it’s a matter of explaining the benefits and drawbacks of the many options.

The Node.js environment, for example, has a number of different frameworks, each suited for a particular purpose. While developers need to know the technical details of each framework, clients really only need to know the factors that will impact their app.

In professional circles, much is said about Node.js and the best Node.js frameworks as a way to facilitate web app development.

What the client should know about Node.js: it is a runtime environment developers use to create web apps. Both the front end (client side) and the back end (server side) can be coded in the same language: JavaScript. Previously, those were two separate areas handled in different programming languages, by different developers.

What to Know About Node.js

Node.js is especially good for developing real-time apps intended for simultaneous connection of multiple users. However, it is not recommended for apps intended for complex computations.

Some of the big names you know—Netflix, Trello, PayPal, LinkedIn, Walmart, Uber, Groupon, eBay, NASA—all use Node.js for their apps.

Node.js offers myriad benefits to companies, including:

  • JavaScript is one of the most popular programming languages—according to GitHub statistics, as of Q3 2017 JavaScript is the top programming language, with 22.5 percent of pull requests. That means there will be no difficulty finding an experienced developer;
  • Node.js’ popularity implies numerous ready-made solutions and contributions that can be used by developers for their apps;
  • Node.js gives the developers much freedom and space for creating a specific app;
  • Since the front-end and back-end are coded in the same language, only one programmer is needed;
  • Because of the factors mentioned above, development and deployment time is shortened and the app can reach the market quickly;
  • Less search time, fewer developers and faster development mean lower costs.

Who is behind Node.js?

Node.js appeared in 2009 and now enjoys the support of an active and fast-growing community. As of today, there are 8 million instances of Node.js, with more than 1,500 contributors and 39,672 stars on GitHub. Mark R. Hinkle, executive director of the Node.js Foundation, said his organization plans to Grow, Engage and Educate. So an app built on it has good prospects for maintainability and future extension.

Express.js (35,310 Stars on GitHub)

The classic Node.js framework, Express appeared in 2010 and has served as the basis for a long list of popular Node.js frameworks. It is considered minimalistic and simple. Express is unopinionated and offers developers much freedom, as they can use various modules to build the app. Due to its flexible nature, it is best suited for large-scale apps that will be customized, extended and supported long-term.

What should the client know about Express.js? Fast, simple, flexible

Sails.js (18,088 Stars on GitHub)

This framework uses Express as a basis but is a complete product that can be applied as-is with its available functionality, which makes it suitable for fast launches. Sails is especially good for building real-time apps that require quick responses, and it is compatible with any front end.

What should the client know about Sails.js? Complete, quick to develop, good for real-time apps

Meteor (38,755 Stars on GitHub)

This is a popular full-package Node.js framework for building real-time, scalable web apps. It handles apps for any device (web, iOS, Android). With Meteor, developers can accomplish more with less code. It is a perfect tool for starting a project across all major platforms.

What should the client know about Meteor.js? Full-stack, time-saving, universal

Loopback (8,000 Stars on GitHub)

This is a highly extensible, modular framework. Loopback is specifically intended for quick and easy API creation. It allows running apps on-premises or in the cloud, and it works well for implementing user authentication and authorization.

What should the client know about Loopback.js? Easy and quick to develop with, as it offers much out of the box; suited for complex integrations.

Hapi (8,701 Stars on GitHub)

The framework appeared in 2011 and is also based on Express. It has since grown into a separate framework with a different approach: to provide as many use cases as possible out of the box. It is intended for large teams and large projects, and will be too complicated for a simple app.

What should the client know about Hapi.js? Configuration over code, reliable, time-saving due to focus on writing reusable app logic, large projects

Bonus Framework: Koa.js (18,495 Stars on GitHub)

This is a new-generation framework. Koa was created as a lighter and more flexible version of Express. It suits both large and small projects and provides for further customization and extension.

What should the client know about Koa.js? Fast development, reliable app foundation, flexible


The list above should help you guide your clients in deciding which framework is best suited for their project. Be sure to explain each framework in the simplest terms so your customers understand the benefits and drawbacks of each. They will appreciate it.


Flow diagram of Tools used in DevOps – DevOps Process and Tools – Medium


A DevOps engineer will work with many of these tools (the choice of tools may vary from environment to environment) as part of day-to-day job duties.

Server Provisioning Technologies: Amazon Web Services (AWS), OpenStack, VMware, CloudFront, Microsoft Azure, Google Cloud and DigitalOcean are the most popular infrastructure providers/technologies used across organizations, in the cloud and on-premises.

Configuration/Deployment Management Tools: Ansible, Chef, Puppet, SaltStack and uDeploy are the most popular automation tools used for configuration management and for deploying code to different environments (Dev, QA, Pre-Prod, Prod).

Continuous Integration Servers: Jenkins, Hudson, Bamboo and Travis CI are the commonly used CI tools that automate application testing, building and deployment, and they can be extended to automate infrastructure provisioning and teardown when integrated with server provisioning technologies and configuration management tools.

Artifact Management Tools: In any enterprise or local environment, versioned executables of a product are stored as artifacts for convenience. Nexus and Artifactory are the most popular and widely used artifact management tools for storing executable artifacts across an organization.

Source Code Version Management Tools: GitHub, GitLab, Subversion, Perforce, CVS and Bitbucket are the most popular tools in this segment. Code written by developers/engineers has to be collaborated on and saved in a single place for daily and future development activities, and it is very important to maintain versioning of the code for easy maintenance and reference.

Build Tools: Maven, Ant, Gulp and Gradle are tools that build your source code; for Java projects, the result is bytecode the JVM can run. Any source code has to be built and compiled into a form a machine can execute.


DevOps Tools for Full-Stack Cloud Monitoring


User Experience

Delivering and maintaining a high-quality application or service requires understanding how users are experiencing your system. If your application is also your storefront or directly influences your bottom line, this is even more crucial. You need to know your system is available, accessible and performing well, and that critical site flows are working correctly. For DevOps teams, where quality is a primary objective, monitoring application performance from a user's perspective enables faster identification and resolution of issues—especially when the inevitable workload latency or regional server disruption occurs.
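Monitoring from the user's perspective often starts with synthetic checks: scripted probes that measure availability and latency against a budget. The sketch below is a minimal, tool-agnostic illustration in Python; the injected `probe` function and the 500 ms budget are assumptions for this example, not any vendor's API:

```python
import time

def check_endpoint(probe, latency_budget_ms=500):
    """Run one synthetic check: probe() performs (or simulates) a request
    and returns an HTTP status code. Returns a simple health verdict."""
    start = time.monotonic()
    status = probe()
    latency_ms = (time.monotonic() - start) * 1000
    return {
        "available": status == 200,
        "within_budget": latency_ms <= latency_budget_ms,
        "latency_ms": round(latency_ms, 1),
    }

# Example with a stubbed probe standing in for a real HTTP request:
result = check_endpoint(lambda: 200)
print(result["available"], result["within_budget"])
```

A real deployment would run checks like this from several regions on a schedule and alert when availability or the latency budget is violated.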





Application and Infrastructure Metrics

To prevent outages, you need to spot issues early, before there is a service disruption. Using DevOps tools for early identification allows you to respond proactively and avert an outage altogether. Metrics on resource usage or response time can give you early warning of potential problems. Metrics are also essential to performance tuning, allowing you to compare current performance to past reference points. Using metrics, you can create a single pane of glass for a unified, real-time view of system performance and usage trends, and set thresholds that generate automated alerts when key performance indicators are not met.
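As a concrete illustration of threshold-based alerting, the sketch below keeps a sliding window of samples and flags when the rolling average crosses a threshold; the metric (CPU percent), window size and threshold are hypothetical:

```python
from collections import deque

class MetricAlert:
    """Track a sliding window of samples and flag when the average
    crosses a threshold -- a minimal sketch of threshold alerting."""
    def __init__(self, threshold, window=5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def record(self, value):
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold  # True means raise an alert

alert = MetricAlert(threshold=80.0, window=3)  # e.g. CPU percent
print(alert.record(50))  # avg 50.0 -> False
print(alert.record(90))  # avg 70.0 -> False
print(alert.record(95))  # avg ~78.3 -> False
print(alert.record(99))  # window is now 90, 95, 99 -> avg ~94.7 -> True
```

Averaging over a window rather than alerting on single samples is what keeps one momentary spike from paging anyone at 3 a.m.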





Distributed Traces

Cloud applications may depend on hundreds of services or systems, and a single request may traverse many services before being fulfilled. Manually troubleshooting issues across all these dependencies and services can be very difficult. Tracing simplifies this by walking the path of an entire request to identify bottlenecks or broken flows. Using distributed tracing, you can visualize the end-to-end behavior of individual requests in real time, drill down into request states, and compare trace timelines to debug performance issues. With distributed tracing, you can quickly and reliably understand the root cause of application issues.
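The core idea can be sketched in a few lines: every span carries the trace ID of the request it belongs to and records its own timing, so the slowest span in a trace points at the bottleneck. This is an illustrative Python sketch, not any particular tracing library's API; the service names and sleep durations are invented:

```python
import time
import uuid

class Span:
    """A minimal distributed-trace span: shared trace_id, own timing."""
    def __init__(self, name, trace_id=None):
        self.name = name
        self.trace_id = trace_id or uuid.uuid4().hex  # new trace if root
        self.duration_ms = None

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.duration_ms = (time.monotonic() - self._start) * 1000

# One request touching two services; child spans share the trace id.
with Span("checkout") as root:
    with Span("payments", root.trace_id) as pay:
        time.sleep(0.05)   # simulate a slow downstream call
    with Span("inventory", root.trace_id) as inv:
        time.sleep(0.005)  # simulate a fast downstream call

slowest = max([pay, inv], key=lambda s: s.duration_ms)
print(slowest.name)  # the bottleneck span in this trace
```

Real tracers (OpenTracing/OpenCensus-style) add propagation of the trace ID across process boundaries, but the data model is essentially this.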





Log Analysis and Management

Logs provide the details, or the evidence, of what has happened with an application or system. But when there is an issue, finding the right log and combing through all the events can be extremely time-consuming. Aggregating and managing logs so that you can quickly find and search the relevant event messages is essential to streamlining troubleshooting of application and system issues. Using DevOps tools to monitor logs and event counts can also help you spot problems early and remediate them before there is an outage.
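Even a simple aggregation makes spikes visible. The sketch below counts events by severity level over a small batch of made-up log lines; real log management tools do the same thing continuously, at scale, with alerting on the counts:

```python
from collections import Counter

# Hypothetical log lines: timestamp, level, message
LOGS = [
    "2018-12-20T10:00:01 INFO  user login ok",
    "2018-12-20T10:00:02 ERROR payment timeout",
    "2018-12-20T10:00:03 ERROR payment timeout",
    "2018-12-20T10:00:04 INFO  user logout",
]

def level_counts(lines):
    """Aggregate events by severity so spikes stand out."""
    return Counter(line.split()[1] for line in lines)

counts = level_counts(LOGS)
print(counts["ERROR"])  # 2 -- an error count worth watching
```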


Loggly: for powerful log analytics

Papertrail: for real-time troubleshooting


Scrum and DevOps

Hello great people of the world. Welcome back to the Professional Software Delivery with Scrum (PSD) blog series with yours truly. This time we're going to talk about how to use Scrum and DevOps together. I am interested in this topic because I quite commonly get a question from someone in the agile community: "Should I use Scrum or DevOps?". This question deserves attention because of the motivation behind choosing DevOps over Scrum: people want to go with DevOps because they want to be agile but feel they can't change their organisation. Many people in the agile community whom I've interacted with think that DevOps is only about tooling for continuous delivery. We're going to learn why DevOps is not only about tooling for the delivery pipeline. Unlike my previous article about Scrum and eXtreme Programming, this time we're going to learn how to integrate Scrum and DevOps starting from a DevOps lens.

Scrum is just a framework

Scrum is just a simple framework for complex product development, based on values and principles. Scrum is not a prescriptive methodology that tells you what your process should look like. Scrum is highly focused on what happens during the Sprint, but it will not tell you what your process within the Sprint should look like. Scrum is an additive framework: it only tells you the minimum set of things you need so that you can claim you are using Scrum. It is like installing software on your computer: there is a minimum required specification, but it is not illegal to install the software on a computer that exceeds it. With this premise, it is not illegal to add practices that enhance the flow of value delivery within the Scrum framework.

Lean Thinking, Systems Thinking and Value Stream Mapping

DevOps starts from Systems Thinking and views the whole value stream in the system rather than zooming in on only the development phase. That means how work gets into development (the upstream) and how work gets delivered to customers (the downstream) are also concerns in DevOps. Systems Thinking looks at how every interconnected element in the system affects the others. In a complex system like a corporation, elements do not work in isolation; making a change to one element will impact other elements in the system.

A value stream is how a customer request flows from one element to another through the whole system until it becomes a tangible outcome. Whenever there is a request, there is a value stream in the system.

Besides Systems Thinking, DevOps is also based on Lean Thinking. Lean Thinking is about reducing waste in the value stream: any activity that does not add value can be considered waste. I am not going to elaborate on Lean and the types of waste in this article.

Lean Thinking, Systems Thinking and mapping the whole value stream are important and work nicely with Scrum, because Scrum is based on Lean Thinking. This is what I do before starting Scrum in a large corporation: view the whole system holistically and map the whole value stream first, rather than only Scrumming the Information Technology department. Kanban is a good tool for visualising the whole value stream in the system.

When we're using Scrum and DevOps, all of the activities in the value stream, from customer request to releasing the product to the production environment or to customers, happen within the Sprint. This does not mean a Sprint is a mini-waterfall where deployment only happens at the end of the Sprint or all of the analysis happens at the beginning. Using Scrum with Kanban helps teams get away from using the Sprint as a mini-waterfall and move towards a single-piece, flow-based model within the Sprint. In this article, I am not going to talk about Scrum and Kanban.

Scrum Team applying DevOps: The Composition

Scrum Teams adopting DevOps will have a different way of working from Scrum Teams that are not adopting DevOps. Not only is their way of working different, their team composition also looks very different.

The Development Team consists of professionals who do the work of delivering a potentially releasable Increment of “Done” product at the end of each Sprint.
– Scrum Guide

Scrum says that the Development Team consists of professionals who deliver the potentially releasable increment at the end of the Sprint. As DevOps views the whole value stream and uses Systems Thinking, the professionals in a Scrum Team adopting DevOps are everyone who processes a Product Backlog Item (PBI) in the whole value stream, end to end. Many people assume the Development Team consists only of developers, which is why many come to think that Scrum is for the development phase only.

In a Scrum Team adopting DevOps, the team composition includes, but is not limited to, marketing people, analysts, UI/UX designers, developers, operations people, sysadmins, data scientists and site reliability engineers. They all work together collaboratively as one unit to deliver value to their customers.

DevOps Three Ways

The DevOps Three Ways are the set of underpinning principles that make up DevOps, based on Lean Thinking and Systems Thinking. The Three Ways work with Scrum because they are not about the specific tools and practices that are so often emphasised in community discussions about DevOps. Scrum Teams adopting the Three Ways will work differently from Scrum Teams that are not adopting them. None of the Three Ways conflicts with the Scrum framework or the Scrum values.

Disclaimer: The practices I elaborate here are not a complete list of practices for implementing the DevOps Three Ways. I will only elaborate on some practices that meet the Three Ways and are already commonly practiced in the communities. I believe in the future there will be more practices meeting the DevOps Three Ways that are not listed here.

1. Optimise Flow

The First Way in the DevOps Three Ways is Optimise Flow. In DevOps, we are concerned with optimising the flow of a single Product Backlog Item from the moment the customer requests it until the customer gets the requested PBI in the form of a tangible item (i.e. a working feature) in the production environment. Anything that gets in the way of PBIs flowing smoothly through the value stream may be a bottleneck that should be removed. Many people in the agile communities believe that flow contradicts Scrum's Sprint because the premise is:

  1. You plan the whole set of PBIs to be completed for the Sprint during Sprint Planning. The Sprint is a commitment.
  2. You can only deliver to production once, after the Sprint Review.

This is the Sprint-as-mini-waterfall model. Even though there is nothing wrong with it, great Scrum Teams should move away from this model as they improve their way of working. We will see why the Sprint works with a flow-based model.

In Scrum, the Scrum Master is the role responsible for removing anything that impedes the flow of value delivery to customers. When the Scrum Team decides to adopt a flow-based model, the Scrum Master needs to learn about that model and coach the whole Scrum Team on how Scrum works with it.

1. Sprint Planning

… enough work is planned during Sprint Planning for the Development Team to forecast what it believes it can do in the upcoming Sprint. Work planned for the first days of the Sprint by the Development Team is decomposed by the end of this meeting, often to units of one day or less.
– Scrum Guide

A Scrum Team adopting a flow-based model will have a different kind of Sprint Planning. Nothing in the Scrum Guide states that the Scrum Team fixes the plan for the Sprint during Sprint Planning. Sprint Planning is focused on, and committed to, the Sprint Goal rather than the Sprint Backlog. It is an opportunity to get everyone looking at the same goal; its purpose is to calibrate the next Sprint's development around a single goal. During Sprint Planning, enough work is forecast for the first few days of the Sprint. More work may emerge later in the Sprint, as long as it does not endanger the Sprint Goal. A Kanban system helps the team manage the work that emerges later in the Sprint.

2. Continuous Delivery to Production environment

The heart of Scrum is a Sprint, a time-box of one month or less during which a “Done”, useable, and potentially releasable product Increment is created.
– Scrum Guide

Nothing in Scrum says you can only deliver to the production environment after the Sprint Review. The Sprint Review is not a phase gate but an opportunity to get feedback about what you have developed, and the Sprint Retrospective is an opportunity to improve the process by which you develop the product. Even in a flow-based model, you need to know whether you have been flowing in the right direction; consider the end of the Sprint to work like a Global Positioning System (GPS) that tracks where you are at the moment. The Sprint Review and the Sprint Retrospective do not and should not block flow; on the contrary, they should enhance it.

One thing I would like to highlight here: the Sprint is not a release cadence but a planning and review cadence. At a minimum, you need a "potentially" releasable product increment at the end of the Sprint. In a previous article, we learned that a Scrum Team adopting eXtreme Programming (XP) delivers to production every night, so there is nothing wrong with releasing to the production environment more than once in a Sprint. A Scrum Team that delivers more than once in a Sprint has gone beyond the minimum standard Scrum requires. We should celebrate that, because not many teams are able, or allowed, to release to production multiple times in a Sprint, let alone multiple times a day.

3. Flow Based Daily Scrum

The structure of the meeting is set by the Development Team and can be conducted in different ways if it focuses on progress toward the Sprint Goal.
– Scrum Guide

A Scrum Team adopting a flow-based model will use the Daily Scrum to inspect the flow of PBIs towards the Sprint Goal. The Daily Scrum becomes a tool for the Scrum Team to optimise flow. Some Scrum Teams may do it more than once a day, because they know the Daily Scrum is not about reporting but about synchronising each other's plans. Some Scrum Teams will look at their Kanban system during the Daily Scrum to optimise flow. During the Daily Scrum, new Sprint Backlog items may be added as the team learns more about its progress. The Scrum Master will work to remove anything impeding flow that is discovered during the Daily Scrum.

4. Definition of Done

Scrum Teams adopting DevOps will have a more ambitious Definition of Done than Scrum Teams that are not adopting DevOps, because they aim to deliver the product to the production environment more than once a Sprint, sometimes up to multiple times a day. If their Definition of Done is not ambitious, they may not meet that aim. Some practices they may have in their Definition of Done that will optimise flow are:

  • Infrastructure as Code, so that the team has a production-like environment at every stage in the value stream.
  • Automated testing that improves flow, from unit testing and integration testing to acceptance testing, with supporting practices such as Test-Driven Development, Behaviour-Driven Development and Acceptance Test-Driven Development.
  • Continuous Integration.
  • Continuous and automated deployment.
  • Decoupling deployment from release, using techniques like canary releases or feature toggles.
  • Architecting for low-risk releases, for example with microservices.

Some of the practices listed above are already implemented by Scrum Teams who are implementing eXtreme Programming (XP). The list above is not necessarily a complete list of practices to optimise flow in the value stream as I believe there will be more practices that may be discovered by the community in the future.
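To make the decoupling of deployment from release concrete, here is a minimal sketch of a percentage-based feature toggle: the new code is already deployed, but a deterministic hash decides which users see it, so a canary can be widened gradually from 0% to 100%. The function and feature names are hypothetical, not from any specific toggle library:

```python
import hashlib

def in_rollout(user_id, feature, percent):
    """Deterministically bucket a user into 0-99: the same user always
    gets the same answer, so a canary can be widened safely."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < percent

# At 0% nobody sees the feature; at 100% everyone does.
print(in_rollout("alice", "new-checkout", 0))    # always False
print(in_rollout("alice", "new-checkout", 100))  # always True
```

Because the bucketing is deterministic, raising `percent` in configuration releases the feature to more users without any new deployment, and dropping it back to 0 is an instant rollback.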

2. Amplify Feedback

The Second Way in the DevOps Three Ways is Amplify Feedback. Scrum is all about feedback: at its core, Scrum has the Sprint and the Daily Scrum as built-in feedback loops. A Scrum Team adopting DevOps will implement these feedback loops differently. A Scrum Team implementing eXtreme Programming has multiple feedback loops, with pair programming and Test-Driven Development as the smallest.

1. Sprint Review

A Scrum Team adopting DevOps will run the Sprint Review differently, because the product is already in the production environment before the Sprint Review is held. So instead of giving feedback about the product increment, the whole Scrum Team, along with the business, looks at a single dashboard covering business-level, application-level, infrastructure-level and deployment-pipeline metrics. By looking at this holistic view rather than isolated metrics, the business and the Scrum Team can make a sound judgment about the product's performance in the market and collaborate on the strategy to apply in the next Sprint to optimise the product's value.

2. Hypothesis-Driven Development and A/B Testing

A Scrum Team adopting DevOps will implement Hypothesis-Driven Development. For this kind of Scrum Team, "Done" does not stop at feature-complete, because they believe completed features do not mean the product is successful in the market. For this kind of Scrum Team, "Done" is when the feature gains traction in the market. The whole Scrum Team works together to ensure that every feature delivered is valuable for customers.
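Hypothesis-Driven Development frames each feature as an experiment: "we believe this feature will raise conversion by some amount, and we will know from the A/B test data." The arithmetic for evaluating such a hypothesis is simple; the visitor and conversion numbers below are entirely hypothetical:

```python
def conversion_rate(visitors, conversions):
    """Fraction of visitors who completed the target action."""
    return conversions / visitors

# Hypothetical experiment: control group vs. variant with the new feature.
control = conversion_rate(1000, 50)    # 5.0% conversion
variant = conversion_rate(1000, 65)    # 6.5% conversion
lift = (variant - control) / control   # relative improvement
print(f"lift: {lift:.0%}")             # lift: 30%
```

In practice the team would also check statistical significance before declaring the hypothesis confirmed, but the point stands: "Done" is judged by measured outcomes, not by shipped code.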

3. You ship it, you manage it

Developers in a Scrum Team adopting DevOps have a higher responsibility and need to learn how to keep the application live and stable in the production environment. To amplify feedback and reduce wasteful activities, developers in a Scrum Team adopting DevOps get to maintain the applications they develop in production. The mantra is: you ship it, you get to manage it. In some organisations this goes as far as giving developers shifts attending to production incidents and outage support calls. Not only are developers given higher responsibility in DevOps, management also needs to extend higher trust and empower developers to go above and beyond.

4. Pair programming and code review

From my previous article, we have learned that Scrum Teams implementing eXtreme Programming do pair programming religiously. We have learned that pair programming is not about two programmers programming together on one computer but about live feedback on how the code will actually work in production. A Scrum Team adopting DevOps will utilise pair programming and code review to amplify feedback.

5. Daily Scrum as a feedback loop

The Daily Scrum is all about feedback; it is not a status report. In a complex, unpredictable environment, feedback is very important, and having feedback loops nested within other feedback loops is essential. The feedback in the Daily Scrum amplifies the feedback received at the end of the Sprint, and the Scrum Team learns how it is progressing towards the Sprint Goal.

The list above is not necessarily a complete list of practices to amplify feedback; I believe more practices will be discovered by the community in the future.

3. Maximise Learning and Experimentation

The Third Way in the DevOps Three Ways is Maximise Learning and Experimentation. The heart of Scrum is continuous learning because Scrum is based on empiricism. Empiricism asserts that knowledge comes from experience and that decisions are made based on what is known (Scrum Guide). The purpose of having Sprints is to maximise learning and to improve how the Scrum Team will operate and deliver value in the next Sprint. Sadly, many organisations without a clear understanding of Scrum values and principles treat the Sprint as a mini-waterfall and fix the scope for the Sprint without any room for the Scrum Team to innovate or learn something new. The Scrum Master is the role responsible for ensuring that the organisation has a culture of learning.

1. Psychologically-safe and blameless environment

A great Scrum Master will focus on, and invest more time in, injecting the Scrum core values into the Scrum Team and the whole organisation rather than just focusing on the Scrum mechanics. They know that the Scrum core values contribute to creating a psychologically safe and blameless environment.

In DevOps, everyone works together to ensure that the product is valuable and meets the business goal. In a psychologically safe and blameless environment, there should be no politics, personal agendas, or silos. Everyone in the value stream looks at the same holistic product metrics regardless of their role. A blameless culture is important so that everyone in the value stream collaborates instead of throwing work over the wall to one another. Any incentives that make people care only about their own metrics should also be removed.

The Third Way is more an organisation-refactoring practice than a technical practice. The Scrum Master is the role responsible for coaching the management and the human resources department on changing incentive systems that block collaboration and foster only politics and bureaucracy.

2. On-demand retrospectives

The Scrum Master encourages the Scrum Team to improve, within the Scrum process framework, its development process and practices to make it more effective and enjoyable for the next Sprint.
– Scrum Guide

One misconception about Scrum in the communities is that retrospectives, or learning in general, only happen at the end of the Sprint. In a Scrum Team adopting a flow-based model, learning needs to happen just in time and more frequently. As we’ve learned earlier, Scrum is an additive process: everything in Scrum is a minimum set of requirements. The Sprint Retrospective provides a focused time to reflect and learn about what happened in the Sprint, but that does not mean Scrum forbids learning multiple times or holding additional retrospectives within a Sprint. If the organisation becomes a learning organisation and wants more learning than happens in retrospectives, we should celebrate it, because many organisations stop growing when they stop learning. Many Scrum Teams celebrate learning during the Daily Scrum, as they see the Daily Scrum as a moment to inspect and adapt.

3. Slack time for improvements

To ensure continuous improvement, it includes at least one high priority process improvement identified in the previous Retrospective meeting.
– Scrum Guide

Many managers and organisations still see Scrum’s Sprint as a mini-waterfall (or sometimes a mini-deadline) that is only about delivering features. People working in these organisations come to think that Scrum does not provide time for improvements. This is not true. In fact, the Scrum Guide states that the Sprint Backlog must include at least one high-priority improvement identified during the previous Retrospective. A Scrum Team adopting DevOps will go further by deliberately providing slack time to innovate, as much as 20% of their time (or more), to improve the value of delivery in the value stream.

The list above is not necessarily a complete list of practices to maximise learning and experimentation; I believe more practices will be discovered by the community in the future.

DevOps is more about Organisation Culture than about Tools

As you can see, DevOps is not about tools and automation in the delivery pipeline. In fact, as we have learned, tools and automation are only one-third of DevOps (I would say even less). Overall, DevOps is about collaboration and collective ownership, focus on the flow of value delivery, and a culture of learning and experimentation. Sadly, many tooling vendors position DevOps as tools and processes for the delivery pipeline (the vendors I’ve witnessed in my market are more focused on tools, but your experience may differ from mine). This gets management excited, because many managers I’ve met think that buying and installing “DevOps” tools, without changing their organisation, will make their company instantly agile. This is like putting the cart before the horse.

From this article, we’ve seen that Scrum and DevOps have more in common than most realise. Just as Scrum is not about tools and process, the DevOps Three Ways is also about values and principles.

Hopefully, this article along with the visual helps you understand how Scrum and DevOps work together and how they do not contradict each other.


SRE vs. DevOps: SRE Is to DevOps What Scrum Is to Agile

DevOps and Site Reliability Engineering (SRE) both seem to rule the world of software development, and at the same time, both appear to overlap or confuse people to some extent. Today, we will try to analyze both terms and identify some factors that differentiate the two.

DevOps Engineer

The term “DevOps Engineer” strives to bridge the divide between Dev and Ops, and suggests that the best approach is to hire engineers who can be excellent coders as well as handle all the Ops functions.

Skills required for a DevOps Engineer:

  • Knowledge and proficiency with a variety of Ops and automation tools
  • Great at writing scripts
  • Comfortable dealing with frequent testing and incremental releases
  • Understanding of Ops challenges and how they can be addressed during design and development
  • Soft skills for better collaboration across the team

You can also read this well-described article on how to be a great DevOps engineer.

Site Reliability Engineer (SRE)

According to Wikipedia, “Site Reliability Engineering is a discipline that fuses aspects of software engineering and applies that to IT operations problems. The main goals are to create ultra-scalable and highly reliable software systems.”

Skills required for an SRE:

  • Ability to run postmortems on unexpected incidents to prevent future hazards
  • Skilled in evaluating new possibilities and capacity planning aptitudes
  • Comfortable with handling the operations, monitoring and alerting
  • Knowledge and experience in building processes and automation to support other teams
  • Ability to persuade organizations to do what needs to be done

The Difference Between DevOps and SRE

  1. DevOps is not a role; it is more of a cultural aspect that can’t be assigned to a person and must be practiced by the whole team. However, to do DevOps, we need some tools. SRE, on the other hand, is the practice of creating and maintaining a highly available service, and it is a role given to a software professional.
  2. SREs sometimes practice DevOps. “DevOps engineer” is sometimes really just a title used to hire sysadmins. While DevOps, as practiced in most organizations, focuses more on the automation part, the SRE’s focus is more on aspects like system availability, observability, and scale considerations.
  3. Ali Fay, a DevOps expert says, “A ‘DevOps Engineer’ is someone who not only understands the full SDLC (Software Development Life Cycle) but has the hands-on skills actually to implement changes to tooling for supporting the improved processes. Usually, those skills are honed from years of experience as a sysadmin and/or developer, allowing them to implement services using good quality code. Whereas SREs main job is ensuring the site (aka “platform/service”) is always operational, no matter what.”
  4. When asked about the difference between SRE and DevOps, Shaun Norris, the global head of cloud infrastructure services at Standard Chartered Bank says, “I like to think that SRE is to DevOps what Scrum is to Agile; one implementation of a philosophy. Not a 100% subset (SRE doesn’t subscribe to the full ‘run what you build’ mantra) but you get the idea…”.
  5. DevOps primarily focuses on empowering developers to build and manage services and giving them measurable metrics to prioritize tasks. There seem to be very few people who can handle a senior DevOps role, since it calls for a combination of software engineer, system engineer, architect, and experienced master. SRE deals with monitoring applications or services after deployment, where automation is crucial to improving a system’s health and availability. The SRE’s role begins after the design work of the software developer.

DevOps and SRE can still be confusing at some level, but it all depends on the company and how it interprets the job profile. The roles and names might vary, but the only thing that stays with you is your skills. At the end of the day, the whole world needs solutions, and with technology becoming more dynamic and enriching day by day, experience and learning matter more than anything else.


Jenkins 2.0 – A New DevOps Engineer On Your Team

Jenkins is a free and open source build server written in Java. It has been widely used by individuals, small companies and large enterprises for setting up Continuous Integration and Continuous Delivery processes since February 2011 (and even earlier, given its Hudson ancestor).

Because it is free and open source, has an unparalleled plugin ecosystem (more than 1,000 plugins are available) providing extreme flexibility, and supports numerous version control systems, build types, and build and post-build steps, Jenkins has become a de facto standard for automating build, testing, packaging and installation jobs.


A New Approach For Shipping Plugins

For the last 5 years, Jenkins had a key challenge, in that it was more of a “skeleton” – not very usable out-of-the-box, with minimal integration capabilities. Jenkins users had to figure out which plugins were required in order to implement their current tasks, find the relevant ones and install them.

How to Setup Jenkins 2.0

This “do-it-yourself” ideology has been reviewed and updated. Jenkins 2.0, released in April, can be installed with some bundled plugins which should be enough to cover the majority of Continuous Integration and Continuous Delivery tasks for almost any software project.

The dialog providing the choice whether to proceed with the default set of plugins or to choose your own is now displayed during the Jenkins installation.

Continuous Delivery Pipeline

The IT world changes quickly, and during the last few years new software development methodologies have appeared and been increasingly adopted. As a result, Jenkins “Freestyle” projects no longer fully cover everyone’s needs, as the number of automated tasks that normally happen between a commit and publishing the new version to production has significantly increased.

That’s why the new Pipeline feature has been added as a default plugin. It enables you to define Jenkins jobs in the form of simple text scripts (in fact, a custom DSL on top of the Groovy language). Instead of defining the steps in the Jenkins UI, you can orchestrate your Continuous Integration processes from commit to delivery using powerful pipeline scripts – which are version-control friendly and human readable – and then track their progress and status directly in the Jenkins job dashboard.
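To make the idea concrete, a minimal scripted Pipeline job might look like the sketch below; the stage names and shell scripts are hypothetical placeholders, not part of any real project:

```groovy
// Scripted Pipeline: a Groovy DSL that lives in version control.
// The shell scripts referenced here are placeholders.
node {
    stage('Build') {
        sh './build.sh'
    }
    stage('Test') {
        sh './run-tests.sh'
    }
    stage('Deploy') {
        sh './deploy.sh'
    }
}
```

Because the script is plain text, it can sit next to the application code and be reviewed like any other change.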

Interested in learning more about integrating performance tests into the new Jenkins Pipeline feature? View our webcast Efficient Performance Test Automation – Optimizing the Jenkins Pipeline.

Modern Look And Feel

Some Jenkins elements were redesigned to provide a better user experience and make configuration and usage more intuitive and user friendly, including (but not limited to):

1. Installation dialogs – i.e. “Customize Jenkins” page where you can select which plugins to install

2. “New Item” dialog: Jenkins 1.x on the left, Jenkins 2.0 – on the right

3. Tabbed interface on the “Job Configuration” page

4. Improvements of “View”, “Agent” and other dialogs

5. The Jenkins website was improved as well, especially the plugins index section

Backwards Compatibility

Jenkins 2.0 is fully backwards compatible with Jenkins 1.x configuration and plugins. All you need to do is to replace the jenkins.war file on your application server with the new one and start enjoying the new Jenkins version.

If you’re running Jenkins 2.0 for the first time, you will need to provide the administrator password during installation. The password is printed to stdout and stored in the secrets/initialAdminPassword file under the Jenkins home folder.

If you get logged out, for instance due to inactivity or after a Jenkins restart, use admin as the username.

It’s easy to kick off a performance test at any stage of a Jenkins job.

  • A JMeter test can be launched in multiple ways:
    • Batch file
    • Ant Task
    • Maven Plugin
    • Triggered from Java code
  • BlazeMeter provides a REST API – JSON over HTTP, so there are multiple ways to execute API calls, including:
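As a sketch of the batch-file route above, JMeter's non-GUI mode can be driven from any script step; the test-plan and results-file names here are hypothetical, and jmeter is assumed to be on the PATH:

```python
import shutil
import subprocess

def build_jmeter_cmd(test_plan, results_file):
    # -n: non-GUI mode, -t: test plan file, -l: results log file
    return ["jmeter", "-n", "-t", test_plan, "-l", results_file]

cmd = build_jmeter_cmd("load_test.jmx", "results.jtl")
print(" ".join(cmd))

# Only actually launch JMeter if it is installed on this machine
if shutil.which("jmeter"):
    subprocess.run(cmd, check=True)
```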

However, if you would like to kick off BlazeMeter tests directly as a Jenkins Build Step, see the performance test results right in the Jenkins dashboard, and be able to fail the build in case of the response time exceeding certain thresholds or performance degradation introduced by a recent commit – it’s better to use the corresponding Jenkins plugins:

If you had these plugins previously, you don’t need to take any action – the plugins will be present and working after the upgrade to Jenkins 2.0.

If you are starting with a fresh Jenkins 2.0 installation, you can install the above plugins via the Plugin Manager:

Manage Jenkins -> Manage Plugins -> Available tab -> type the plugin name in the Filter input.

Watch our on-demand webcast on How to Build Testing Into Your CI Pipeline, featuring a special guest DevOps Engineer from MIT.


You might also be interested in checking out the following articles for detailed Performance and BlazeMeter plugins installation and usage instructions.

Continuous Integration 101: How to Run JMeter With Jenkins

BlazeMeter’s Jenkins Plugin

If you have any questions, feel free to leave them in the comments section below.


The Limiting Factors of Microservices Also Apply to FaaS

Limiting Factors of Microservices

In enterprise development and deployment, the pattern is easy to detect. A big monolithic application is targeted for re-architecting as a microservices architecture. People excited about microservices get together and break up the design to make it far less monolithic and far more loosely coupled.

If your organization is lucky, it is at this stage that those responsible realize that breaking a monolithic application down too far is bad for complexity, performance, monitoring and maintenance—pretty much everything re-architecting is supposed to fix.

Far too many organizations don’t realize this until the implementation phase, though. First, a metric ton of little tiny microservices are deployed, then the organization realizes that functionality such as database access can be applied much more efficiently through one larger service that handles all interaction with the database for the app, rather than through five or 10 smaller services that replicate code and end up calling each other anyway.

Function as a service (FaaS) has much the same issue. It is an astounding solution for some uses, but in the end, users must consider what access the function requires to other parts of the system and what task is being achieved. The early uses of FaaS are well-documented; longer-running independent jobs such as image or video processing were an early hit, for example, particularly for services with highly varying traffic rates. But other uses, such as handling bottlenecks in traffic spikes (hello, holiday shopping season!) or out-of-band operations processing, are less documented at this point. Entire applications have been designed from the ground up as serverless, but they are relatively few in terms of overall applications out there.

FaaS is highly suited to DevOps, because upgrading the function is as simple as redeploying code. There are no concerns about OS or library version, because those are largely beyond your control. But that is as big a negative as it is a positive. There is no flexibility in OS version or library inclusion because those are beyond your control—for the most part, anyway.
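To illustrate how little surface area a function exposes, here is a minimal AWS-Lambda-style handler sketch; the event shape is an assumption for illustration, not any particular provider's contract:

```python
import json

# A minimal FaaS handler sketch. The platform owns the OS and runtime
# underneath: "upgrading" the function is nothing more than redeploying
# this code.
def handler(event, context=None):
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

print(handler({"name": "DevOps"})["body"])
```

Everything outside these few lines (OS patches, library versions, scaling) belongs to the platform, which is exactly the trade-off the paragraph above describes.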

As with the vast majority of new technologies, expect adoption of serverless to grow and the hype to grow more quickly. It is entering a market that has seen massive shifts – virtualization and then containers really did live up to the hype in terms of usage – but it needs a driving reason for adoption. Look closely at what your application and its DevOps toolchain are doing before hopping into the deep end. For some uses, FaaS is the fastest/easiest/lowest-maintenance solution. But that does not make it the Swiss army knife that virtualization and containers have proven to be. Indeed, if you think persistent storage is painful under containers, try it in a FaaS environment – as just one example.

FaaS is useful in many scenarios, but just remember where/when/why you are adding FaaS into your DevOps processes, and take advantage of on-demand processing with few scaling limitations while avoiding seeing everything as a FaaS problem because FaaS is the FaD.


CloudBees: 2019 to Be Year of DevOps Metrics

While DevOps has come a long way in terms of understanding and adoption, 2019 is going to be marked by a concerted effort to apply metrics to DevOps processes. Applying those metrics will not only substantiate the value of adopting DevOps, it will also provide the impetus many organizations need to close a variety of DevOps gaps that have emerged in recent years.

For example, a recent survey of 1,076 IT professionals conducted by CloudBees, provider of the open source Jenkins continuous integration/continuous delivery platform, suggests that while a lot of advances have been made in terms of developers embracing continuous integration, there remains a lot of work to be done before organizations fully embrace continuous deployment. The survey finds 67 percent of respondents say they are practicing DevOps, but only 50 percent have implemented continuous deployment.

Brian Dawson, DevOps evangelist for CloudBees, said one of the major reasons that gap exists is that getting developers to embrace continuous integration is an easier challenge than getting disparate developer and IT operations teams to jointly embrace continuous deployment.

Closing that gap will require more organizations to measure precisely the benefits they are achieving by making the transition to DevOps, sometimes referred to as value stream management, said Dawson. Once those metrics are collected, he said, it should become easier to drive DevOps processes deeper into most organizations.

The challenge, however, is that most organizations are not especially disciplined when it comes to monitoring metrics. In fact, according to the CloudBees survey, 34 percent of respondents admit they don’t gather any metrics from their DevOps pipelines.
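As a sketch of the kind of pipeline metrics being discussed, two commonly cited measures (lead time for changes and deployment frequency) can be computed from deployment records; the records below are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit time, deploy time) pairs
# pulled from a pipeline's logs.
deploys = [
    (datetime(2018, 12, 1, 9), datetime(2018, 12, 1, 15)),
    (datetime(2018, 12, 3, 10), datetime(2018, 12, 4, 11)),
    (datetime(2018, 12, 5, 14), datetime(2018, 12, 5, 18)),
]

# Lead time for changes: mean time from commit to deploy
lead_time = sum((d - c for c, d in deploys), timedelta()) / len(deploys)

# Deployment frequency: deploys per day over the observed window
window_days = (deploys[-1][1] - deploys[0][0]).days or 1
frequency = len(deploys) / window_days

print(f"avg lead time: {lead_time}, deploys/day: {frequency:.2f}")
```

Even a rough script like this gives a team a baseline to compare against after a process change, which is the discipline the survey says most organizations lack.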

In the meantime, Dawson noted the rise of containers and Kubernetes soon may force more widespread adoption of DevOps, as microservices based on containers are frequently updated. Kubernetes may soon provide a common set of definitions from which developers and IT operations teams can create a unified set of processes. A full 79 percent of respondents report they are using Docker and 47 percent employ Kubernetes. Additionally, 38 percent of survey respondents are using containers in both development and test activities, compared to 33 percent using containers from development all the way through to production. CloudBees is trying to drive adoption of Jenkins X, a CI/CD platform that is based on a microservices architecture enabled by containers and Kubernetes.

Of course, the DevOps gap is hardly limited to definitions. The CloudBees survey also exposes communications gaps that need to be bridged in 2019. For instance, according to the survey, C-level executives and developers have a generally lower opinion than IT operations teams and senior managers concerning the degrees to which DevOps practices are being adopted.

Dawson said greater focus on DevOps metrics will go a long way toward closing DevOps gaps that are becoming more apparent with each passing day. DevOps is relatively easy to subscribe to as a philosophy. It’s only when organizations begin to implement CI/CD systems in a way that can be measured objectively that the commitment to that philosophy gets tested.


The DevOps practices you need to implement in 2019

DevOps in 2019

Over the past few years, DevOps has evolved from a vague technical term to a revolutionary way-of-working for modern organisations.

That certainly hasn’t changed in 2018. According to research from DORA and Google Cloud, DevOps is continuing to increase productivity, market share, customer satisfaction, and technical excellence for companies globally.

But what will 2019 bring for DevOps? Having spent the last year travelling the globe to attend and speak at leading conferences, our team has gained a unique insight into the latest industry trends. During that time, they’ve had an opportunity to network and exchange ideas with key industry influencers.

In our most recent webinar, DevOps Product Owner Edward Pearson, Principal DevOps Consultant Raj Fowler, and Senior DevOps Transformation Consultant Graham Smith reflected on the things they’ve learnt in 2018 and the DevOps practices that organisations need to implement in the year ahead. Here’s our summary.

Increased adoption

Raj Fowler, Principal DevOps Consultant, noted how more companies and teams are becoming aware of the power of DevOps. He believes that this trend will continue in 2019, with more organisations looking to disrupt the industries in which they operate and deliver repeatable value to customers.

“For the past five months, I’ve attended lots of webinars, events and conferences – and also met with a number of clients to understand what they’re trying to do and where they’re trying to go. The concepts, principles and practices of DevOps, in particular those moving towards products over projects, in fairly sizeable enterprises are more mainstream than I thought,” he said.

In 2019, Raj expects the increasing adoption of DevOps to result in smaller work batches; happier employees and customers; more competition for clients and staff; products over projects; more Agile and Lean practices; more learning; higher throughput, lower lead times and less burnout; and widespread disruption.

He added: “From Disney to the BBC, lots of organisations out there are heading towards a DevOps way-of-working. This is no longer a set of tools, competencies, and cultural norms specific to disruptive tech firms like Facebook, Netflix, and Spotify. This time next year, I think we’ll be talking more about how people are scaling these practices across modern enterprises.”

Trading in organisational currency

While a big part of DevOps is responding to change, change can be understandably daunting. People often fear the unknown and what it means for them. To understand how teams will react to change, DevOps Product Manager Ed Pearson said organisations should first understand the things their people value.

“What organisations will need to be aware of over the next twelve months is how change and transformation affect their teams. For me, understanding the people in your organisation is the first part of the equation, and the second one is helping them understand how they’re valued. One of the phrases we’ve been using at DevOpsGroup to describe this is organisational currency,” he explained.

At DevOps Days London, Ed spoke with a number of people helping to drive change initiatives within organisations. One of the things that became clear was how many organisations value, or are seen to value, the wrong thing in their people. A classic example would be technical specialists whose organisational currency is based on a specific technology such as Exchange, VMWare, or even a programming language such as COBOL.

Although the business may be adopting new technologies like cloud computing and microservices, these professionals aren’t less valuable. It’s often the case that they’re long-serving employees with a deep understanding of the company and market.

Ed continued: “The conversation now isn’t about technology, but it’s about how this person understands the business logic and understands the nuances of what it takes to deliver these applications throughout the business better than anybody else. And because they’ve been there and done it, they know what it means to build these applications and integrate them.”

Going into 2019, Ed wants to see organisations exploring what DevOps transformation truly means for them and how their teams can help to accelerate this. He said: “The question will be: How do we take traditional sysadmins and equip them to work in modern cloud platforms like AWS and Azure?” Re-skilling is, therefore, paramount. Agreeing with Ed, Graham commented: “Getting into the cloud is a great thing for enhancing skills and taking you on that next step of the learning journey.”

We’ll have a tool for that

Citing the Periodic Table of DevOps from XebiaLabs, Senior DevOps Transformation Consultant Graham emphasised how the range of DevOps tooling is quickly expanding. And he expects it to grow even more in 2019. “Tools are not the answer to DevOps, but they’re still important. Anyone involved in DevOps will have noticed that we’re experiencing what you might call a Cambrian explosion of tooling. In every category, there’s scores of tool options available,” he said.

“My prediction is that if you’ve got a problem, there’s almost certainly going to be a tool which will exist to solve it. And I’d definitely consider adopting a tool before building something in-house. I think it pays us well to remember what Jeffrey Snover said at DevOps London: Build what differentiates you and buy what doesn’t.”

Of course, being able to purchase tools can be constrained by budgets and other resourcing factors. To find cost-effective solutions that work for your organisation, Graham recommends checking out the different DevOps tooling categories and particularly open-source options.

Aside from cost, Graham explored whether there are other organisational barriers in terms of tooling. One of the main findings in the 2018 Accelerate State of DevOps report is that teams should be allowed to choose their own tools, but Graham asked if that’s always going to be the case. His view is that this is fine for large companies like Google, Amazon, and Facebook, although it may not always be a viable option for SMEs.

Ed interjected, saying organisations could end up with an explosion of tools that just aren’t sustainable. “It may be that you’re not paying licensing to get it installed, but every tool you put in has an overhead for the organisation – whether it’s the infrastructure you run it on, training people to use it, or integrating it with other tooling.” The key, then, is working out what tools you actually need.

As 2018 ends, it’s clear that there have been some exciting developments for DevOps. Whether it’s finding new ways to get the most out of DevOps, taking your teams on the journey, or exploiting new tools, we have no doubt that 2019 will be just as exciting for the sector.