SmartBear Named a Leader in Gartner Magic Quadrant for Software Test Automation

SmartBear Recognized as the Vendor Positioned Highest on the Ability to Execute Axis in the Magic Quadrant

SOMERVILLE, Mass. – Nov. 29, 2018 – SmartBear, the innovator behind the industry’s highest impact tools to build, test, and monitor great software, was named a Leader in the 2018 Gartner Magic Quadrant for Software Test Automation.[1] Gartner, the world’s leading information technology research and advisory company, recognized SmartBear for the fourth consecutive year, and its first year as a Leader. The company believes this position is recognition of the commitment SmartBear has made to accelerate delivery by enabling continuous testing.

The criteria used by Gartner to evaluate companies in the Magic Quadrant include completeness of vision and ability to execute. This Magic Quadrant examined 11 software test automation vendors across a range of criteria and positioned SmartBear as a Leader.

“We’re proud to be recognized by Gartner as a Leader in the Magic Quadrant,” said Christian Wright, Chief Product Officer and Executive GM at SmartBear. “We believe our Leader position reaffirms our vision of enabling quality throughout the software development lifecycle through a comprehensive test automation portfolio. We see Gartner’s recognition as a reflection of the excellent feedback we receive from our vast customer base and of our teams’ ability to deliver open, collaborative, and easy-to-use tools to our users.”

SmartBear has continued to push continuous testing across the entire software delivery lifecycle, providing a portfolio of tools that enable business stakeholders, developers, QA professionals, and operations teams to easily design, test, and monitor at both the API and UI layers. Its strong support for open source communities like the OpenAPI Initiative, Swagger, SoapUI, and Selenium has been a catalyst behind its bottom-up growth throughout organizations of every size, including Wistia, Discover, and JetBlue.

Over the last year, SmartBear’s product development efforts and acquisitions have continued to lead and shape the market:

  • With its acquisition of Zephyr, SmartBear now offers the most comprehensive set of test management offerings. Zephyr has two main products: Zephyr for Jira, which supports Atlassian customers that want native test management inside their project management solution, and Zephyr Enterprise, which helps enterprise testing teams looking for a modern replacement for HP Quality Center.
  • SmartBear also acquired HipTest, the first continuous testing platform with native BDD support. HipTest empowers Agile and DevOps teams to deliver high quality software, faster, by collaborating on an idea, testing code continuously, and generating living documentation for real-time insights.
  • SmartBear added Artificial Intelligence to TestComplete to feature the industry’s first hybrid object recognition engine that marries property-based recognition with AI-powered visual recognition. This new addition eliminates common accuracy and reliability issues for UI test automation engineers when testing charts, PDFs, consoles, mainframe, and packaged applications.

To access a complimentary copy of the 2018 Gartner Magic Quadrant for Software Test Automation, visit https://smartbear.com/resources/white-papers/gartner-magic-quadrant-2018/.

[1] Gartner Magic Quadrant for Software Test Automation, by Joachim Herschmann, Thomas E. Murphy, Jim Scheibmeir, November 27, 2018.

About the Magic Quadrant

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About SmartBear

SmartBear is behind the software that empowers developers, testers, and operations engineers at over 20,000 of the world’s most innovative organizations including Adobe, JetBlue, MasterCard, and Microsoft. More than 6 million people use our tools to build, test, and monitor great software, faster. Our high-impact tools are easy to try, easy to buy, and easy to use. These tools are backed by a team of people passionate about helping you create software that transforms our world. Those tools are SmartBear tools. That team is SmartBear. For more information, visit: http://smartbear.com/, or follow us on LinkedIn, Twitter or Facebook.

All trademarks recognized.

Source

Ed Dipple speaks about how to get the most out of PowerShell

When it comes to Windows system administration and automation, PowerShell is traditionally the go-to tool for IT professionals and developers.

Described as a task-based command-line shell and scripting language, it allows you to control and automate the administration of operating systems and the applications they power.

Created by Jeffrey Snover, Bruce Payette, and James Truher in 2003, PowerShell runs on the .NET framework from Microsoft and supports Windows, Linux, and macOS devices. Not only have its use cases evolved over the years, but a large community of like-minded techies has also grown up around it.

Ed Dipple, Lead CloudOps Engineer at DevOpsGroup, uses PowerShell extensively on a day-to-day basis. In this piece, he talks about the benefits of the tool, how you can get the most out of it, and how you can get involved in the growing PowerShell community.

Accelerating digital transformation

Managing systems isn’t an easy task, but PowerShell provides you with a consistent and simple way to control the complexity involved. Ed’s view is that it makes life as a developer easier, particularly when looking to maximise productivity and handle data.

“PowerShell was originally designed to replace the Windows Batch and VBScript languages that were historically used to automate repetitive tasks in the Windows world. What makes PowerShell valuable compared to Linux-based tools such as Bash is that everything is treated as a .NET object. This allows you to read and manipulate data easily,” he explains.

As PowerShell has evolved over the years, it’s become popular among professionals and organisations embarking on digital transformation journeys. Ed says: “PowerShell plays an incredibly valuable role within DevOps. Its use cases include server auditing, parallel server updates, creating servers in Azure, and managing Active Directory. In one language, you can automate all the systems you encounter.”

Using PowerShell

On a daily basis, Ed manages complex systems and works with a variety of enterprise clients. He uses PowerShell to streamline his workload. “I mostly use PowerShell to test and update the state of Windows servers. But it’s also handy for automating various tasks within Azure,” he says.

“More recently, I’ve been using it to model user workflows in the absence of a rich web interface. Without PowerShell, working in the Windows environment would be a more difficult experience.”

If you’re just getting started with PowerShell or are looking for things to do with the tool, Ed has some key tips. He recommends: “I’d say definitely make sure you’re using Pester. It’s a testing framework and allows you to work out if the code you’re writing actually works. You can also make use of the ISE and VS Code debuggers to go through your code line-by-line and work out which elements don’t work.”

Community is at the heart of PowerShell

Since launching in the early 2000s, PowerShell has seen six major versions released to the public and has improved greatly. However, this has only been possible due to the contribution of developers globally. Ed is one of the IT pros playing an active role in the PowerShell community.

“Last year, the multi-platform version of PowerShell was released – providing compatibility for Mac OS and Linux. But at the time, it was an early alpha version and some of the popular community-supported PowerShell modules didn’t work yet – in my case, Pester and Plaster. I was involved in trying to fix these issues before the version was pushed out to more users,” he remarks.

Ed continues: “My biggest takeaway from the event was during the keynote, when Richard Siddaway said the future of PowerShell is now in the hands of the community.

“Because Microsoft has made many elements of PowerShell open-source, and because it now supports a wide range of operating systems, they can’t handle it by themselves. The PowerShell team want people to drive what the tool is and what it will become over the next few years.”

Driving innovation forward in the PowerShell world may sound like a daunting task, but getting involved in the community is actually simple. “Firstly, check out Stack Overflow and see if you can answer any questions. There’s quite a lot of questions you could help with, and you don’t have to be a PowerShell guru to get involved,” says Ed.

“If you’re using a popular PowerShell module and think it’s missing a feature, just go on GitHub, update it, and create a Pull request. Everything is written in PowerShell, so it’s easy to understand and make some changes.”

As part of the DevOps transformation journey, PowerShell plays a crucial role in automating complex tasks and giving developers greater control of systems. But what’s clear is that the future of PowerShell will be determined by the developer community. You can help to shape what it becomes and how it drives value for your organisation.

Source

How To Set Up the Latest Nexus On Kubernetes

Nexus is an open source artifact storage and management system. It is a widely used tool and can be seen in most CI/CD workflows. We have covered Nexus setup on a Linux VM in another article.

This guide will walk you through the step-by-step process of deploying Sonatype Nexus OSS on a Kubernetes cluster.

Set Up Nexus OSS On Kubernetes

Key things to note:

  1. The Nexus deployment and service are created in the devops-tools namespace, so make sure you have the namespace created, or edit the YAML to deploy into a different namespace. Also, there are different deployment files for the Nexus 2 and Nexus 3 versions.
  2. In this guide, we are using a plain volume mount (emptyDir) for the Nexus data. For production workloads, you need to replace it with a persistent volume (see the sketch after this list).
  3. The service is exposed as a NodePort. It can be replaced with the LoadBalancer type on a cloud.
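
Point 2 above calls for persistent storage in production. As a minimal sketch of what that swap could look like (the claim name, size, and reliance on a default StorageClass are assumptions, not part of the original setup), you could create a PersistentVolumeClaim and reference it from the Deployment instead of the emptyDir volume:

kubectl apply -n devops-tools -f - <<'EOF'
# Hypothetical PVC for Nexus data; adjust the size and storageClassName for your cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
EOF

# Then, in the Deployment's volumes section, replace the emptyDir entry with:
#   volumes:
#     - name: nexus-data
#       persistentVolumeClaim:
#         claimName: nexus-data-pvc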

Let’s get started with the setup.

Step 1: Create a namespace called devops-tools

kubectl create namespace devops-tools

Step 2: Create a Deployment.yaml file. It is different for Nexus 2.x and 3.x, and we have given both; create the YAML based on the Nexus version you need. Note: The images used in this deployment are from the official public Sonatype Docker repo (Nexus 2 image & Dockerfile, Nexus 3 image & Dockerfile).

  1. Deployment YAML for Nexus 2.x: Here we are passing a few customizable environment variables and adding a volume mount for the Nexus data.

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nexus
      namespace: devops-tools
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: nexus-server
        spec:
          containers:
            - name: nexus
              image: sonatype/nexus:latest
              env:
                - name: MAX_HEAP
                  value: "800m"
                - name: MIN_HEAP
                  value: "300m"
              resources:
                limits:
                  memory: "4Gi"
                  cpu: "1000m"
                requests:
                  memory: "2Gi"
                  cpu: "500m"
              ports:
                - containerPort: 8081
              volumeMounts:
                - name: nexus-data
                  mountPath: /sonatype-work
          volumes:
            - name: nexus-data
              emptyDir: {}

  2. Deployment YAML for Nexus 3.x: Here we don't have any custom environment variables. You can check the official Docker repo for the supported environment variables.

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nexus
      namespace: devops-tools
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: nexus-server
        spec:
          containers:
            - name: nexus
              image: sonatype/nexus3:latest
              resources:
                limits:
                  memory: "4Gi"
                  cpu: "1000m"
                requests:
                  memory: "2Gi"
                  cpu: "500m"
              ports:
                - containerPort: 8081
              volumeMounts:
                - name: nexus-data
                  mountPath: /nexus-data
          volumes:
            - name: nexus-data
              emptyDir: {}

Step 3: Create the deployment using the kubectl command.

kubectl create -f Deployment.yaml

Check the deployment pod status

kubectl get po -n devops-tools

Step 4: Create a Service.yaml file with the following contents to expose the nexus endpoint using NodePort.

Note: If you are on a cloud, you can expose the service with a load balancer by using the service type LoadBalancer. Also, the Prometheus annotations will help Prometheus monitor the service endpoint.

apiVersion: v1
kind: Service
metadata:
  name: nexus-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8081'
spec:
  selector:
    app: nexus-server
  type: NodePort
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 32000

Check the service configuration using kubectl.

kubectl describe service nexus-service -n devops-tools

Step 5: You will now be able to access Nexus on port 32000 of any Kubernetes node IP (with the /nexus path for Nexus 2), as we have exposed it as a NodePort. For example:

For Nexus 2,

http://35.144.130.153:32000/nexus

For Nexus 3,

http://35.144.130.153:32000

Note: The default username and password will be admin and admin123.
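
If you are not sure which node IP to use in the URLs above, kubectl can list the node addresses (assuming your kubeconfig points at the cluster):

kubectl get nodes -o wide
# The INTERNAL-IP / EXTERNAL-IP columns show the addresses you can pair with NodePort 32000.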

Source

Red Hat Injects DevOps Flexibility into RHEL 8

With the beta release of the next version of Red Hat Enterprise Linux (RHEL), Red Hat is setting the stage for making it easier for DevOps teams to consume emerging operating system services without having to upgrade their entire operating system.

The beta release of RHEL 8 adds an Application Streams capability designed to make it possible to leverage userspace more flexibly, as modules running in userspace now can be updated more quickly than the core operating system.

Matt Micene, the technical product marketing manager for RHEL 8, said it’s also now possible for multiple versions of the same package—for example, an interpreted language or a database—to be made available via an application stream. That means different DevOps teams will be able to more easily work with different versions of various classes of technologies without having to wait for the next upgrade of the operating system, he said.
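
As a rough illustration of how an Application Stream would be consumed on a RHEL 8 beta host (the module name and stream versions below are illustrative, not a statement of exactly what ships):

# List the streams available for a module in the AppStream repository
yum module list postgresql

# Install one stream of the module without touching the core operating system
yum module install postgresql:10

# Switch the host to a different stream later as project needs change
yum module reset postgresql
yum module install postgresql:9.6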

As part of that focus on flexibility, Red Hat is also moving to make it easier for organizations that have embraced microservices based on containers to build applications. Open source tools such as Buildah for container building, Podman for running containers and Skopeo for sharing and finding containers are now fully supported in the operating system. Red Hat is also extending support for container networking using IPVLAN, which enables containers nested in virtual machines (VMs) to access networking hosts. There’s also a new TCP/IP stack with Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control. As organizations increasingly adopt containers, many of them are starting to make operating systems decisions based on which platform has the most containerized optimized services built in.

Other capabilities being added include support for OpenSSL 1.1.1 and TLS 1.3 along with Systemwide Cryptographic Policies; a Stratis file management system for dealing with lots of volumes; a Composer tool for building custom images that can be deployed on hybrid clouds; support for the Yum 4 package manager; and a new web console for managing the RHEL environment that presents administrators with the same user interface regardless of their experience level.

Micene said Red Hat is trying to strike a balance between stability and IT agility. There are plenty of organizations that still prefer to upgrade their operating systems at very well-defined intervals. But there are also an increasing number of organizations that want to be able to take advantage of the latest IT innovations without having to wait for an operating system upgrade. Many of those organizations increasingly view IT as being core to their ability to compete, he said.

Over time, most organizations will develop a more schizophrenic approach to IT. There will be some application workloads where stability trumps all other requirements. At the other end of the scale, innovation will be the most prized attribute. To one degree or another, every workload is going to fall along a spectrum between those two extremes. The challenge is defining the right mix of DevOps processes to serve the needs of any and every workload, regardless of its attributes.

Source

Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source

Today, we are launching support for Amazon Elastic Container Registry (Amazon ECR) as a source provider in AWS CodePipeline. You can now initiate an AWS CodePipeline pipeline update by uploading a new image to Amazon ECR. This makes it easier to set up a continuous delivery pipeline and use the AWS Developer Tools for CI/CD.

You can use Amazon ECR as a source if you’re implementing a blue/green deployment with AWS CodeDeploy from the AWS CodePipeline console. For more information about using the Amazon Elastic Container Service (Amazon ECS) console to implement a blue/green deployment without CodePipeline, see Implement Blue/Green Deployments for AWS Fargate and Amazon ECS Powered by AWS CodeDeploy.

This post shows you how to create a complete, end-to-end continuous deployment (CD) pipeline with Amazon ECR and AWS CodePipeline. It walks you through setting up a pipeline to build your images when the upstream base image is updated.

Prerequisites

To follow along, you must have these resources in place:

  • A source control repository with your base image Dockerfile and a Docker image repository to store your image. In this walkthrough, we use a simple Dockerfile for the base image:

FROM alpine:3.8

RUN apk update

RUN apk add nodejs

  • A source control repository with your application Dockerfile and source code and a Docker image repository to store your image. For the application Dockerfile, we use our base image and then add our application code:

FROM 012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image

ENV PORT=80

EXPOSE $PORT

COPY app.js /app/

CMD ["node", "/app/app.js"]

This walkthrough uses AWS CodeCommit for the source control repositories and Amazon ECR for the Docker image repositories. For more information, see Create an AWS CodeCommit Repository in the AWS CodeCommit User Guide and Creating a Repository in the Amazon Elastic Container Registry User Guide.

Note: The source control repositories and image repositories must be created in the same AWS Region.

Set up IAM service roles

In this walkthrough, you use AWS CodeBuild and AWS CodePipeline to build your Docker images and push them to Amazon ECR. Both services use Identity and Access Management (IAM) service roles to make calls to Amazon ECR API operations. The service roles must have a policy that provides permissions to make these Amazon ECR calls. The following procedure helps you attach the required permissions to the CodeBuild service role.

To create the CodeBuild service role

  1. Follow these steps to use the IAM console to create a CodeBuild service role.
  2. In step 10, make sure to also add the AmazonEC2ContainerRegistryPowerUser policy to your role.

CodeBuild service role policies

Create a build specification file for your base image

A build specification file (or build spec) is a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build. Add a buildspec.yml file to your source code repository to tell CodeBuild how to build your base image. The example build specification used here does the following:

  • Pre-build stage:
    • Sign in to Amazon ECR.
    • Set the repository URI to your ECR image and add an image tag with the first seven characters of the Git commit ID of the source.
  • Build stage:
    • Build the Docker image and tag the image with latest and the Git commit ID.
  • Post-build stage:
    • Push the image with both tags to your Amazon ECR repository.

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=$COMMIT_HASH
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG

To add a buildspec.yml file to your source repository

  1. Open a text editor and then copy and paste the build specification above into a new file.
  2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image) with your Amazon ECR repository URI (without any image tag) for your Docker image. Replace base-image with the name for your base Docker image.
  3. Commit and push your buildspec.yml file to your source repository.

git add .
git commit -m "Adding build specification."
git push
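
If you need to look up the repository URI referenced in the buildspec, the AWS CLI can return it (assuming the repository is named base-image, as in this walkthrough):

aws ecr describe-repositories --repository-names base-image \
  --query 'repositories[0].repositoryUri' --output text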

Create a build specification file for your application

Add a buildspec.yml file to your source code repository to tell CodeBuild how to build your source code and your application image. The example build specification used here does the following:

  • Pre-build stage:
    • Sign in to Amazon ECR.
    • Set the repository URI to your ECR image and add an image tag with the first seven characters of the CodeBuild build ID.
  • Build stage:
    • Build the Docker image and tag the image with latest and the Git commit ID.
  • Post-build stage:
    • Push the image with both tags to your ECR repository.

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=012345678910.dkr.ecr.us-east-1.amazonaws.com/hello-world
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      # write the image URI to imageDetail.json, which is declared as the build artifact below
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - imageDetail.json

To add a buildspec.yml file to your source repository

  1. Open a text editor and then copy and paste the build specification above into a new file.
  2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-east-1.amazonaws.com/hello-world) with your Amazon ECR repository URI (without any image tag) for your Docker image. Replace hello-world with the container name in your service’s task definition that references your Docker image.
  3. Commit and push your buildspec.yml file to your source repository.

git add .
git commit -m "Adding build specification."
git push

Create a continuous deployment pipeline for your base image

Use the AWS CodePipeline wizard to create your pipeline stages:

  1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  2. On the Welcome page, choose Create pipeline.
    If this is your first time using AWS CodePipeline, an introductory page appears instead of Welcome. Choose Get Started Now.
  3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline and choose Next step. For this walkthrough, the pipeline name is base-image.
  4. On the Step 2: Source page, for Source provider, choose AWS CodeCommit.
    1. For Repository name, choose the name of the AWS CodeCommit repository to use as the source location for your pipeline.
    2. For Branch name, choose the branch to use, and then choose Next step.
  5. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create project.
    1. For Project name, choose a unique name for your build project. For this walkthrough, the project name is base-image.
    2. For Operating system, choose Ubuntu.
    3. For Runtime, choose Docker.
    4. For Version, choose aws/codebuild/docker:17.09.0.
    5. For Service role, choose Existing service role, choose the CodeBuild service role you’ve created earlier, and then clear the Allow AWS CodeBuild to modify this service role so it can be used with this build project box.
    6. Choose Continue to CodePipeline.
    7. Choose Next.
  6. On the Step 4: Deploy page, choose Skip and acknowledge the pop-up warning.
  7. On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline.

Base image pipeline

Create a continuous deployment pipeline for your application image

The execution of the application image pipeline is triggered by changes to the application source code and changes to the upstream base image. You first create a pipeline, and then edit it to add a second source stage.

    1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
    2. On the Welcome page, choose Create pipeline.
    3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline, and then choose Next step. For this walkthrough, the pipeline name is hello-world.
    4. For Service role, choose Existing service role, and then choose the CodePipeline service role you modified earlier.
    5. On the Step 2: Source page, for Source provider, choose Amazon ECR.
      1. For Repository name, choose the name of the Amazon ECR repository to use as the source location for your pipeline. For this walkthrough, the repository name is base-image.

Amazon ECR source configuration

  1. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create project.
    1. For Project name, choose a unique name for your build project. For this walkthrough, the project name is hello-world.
    2. For Operating system, choose Ubuntu.
    3. For Runtime, choose Docker.
    4. For Version, choose aws/codebuild/docker:17.09.0.
    5. For Service role, choose Existing service role, choose the CodeBuild service role you’ve created earlier, and then clear the Allow AWS CodeBuild to modify this service role so it can be used with this build project box.
    6. Choose Continue to CodePipeline.
    7. Choose Next.
  2. On the Step 4: Deploy page, choose Skip and acknowledge the pop-up warning.
  3. On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline.

The pipeline will fail, because it is missing the application source code. Next, you edit the pipeline to add an additional action to the source stage.

  1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  2. On the Welcome page, choose your pipeline from the list. For this walkthrough, the pipeline name is hello-world.
  3. On the pipeline page, choose Edit.
  4. On the Editing: hello-world page, in Edit: Source, choose Edit stage.
  5. Choose the existing source action, and choose the edit icon.
    1. Change Output artifacts to BaseImage, and then choose Save.
  6. Choose Add action, and then enter a name for the action (for example, Code).
    1. For Action provider, choose AWS CodeCommit.
    2. For Repository name, choose the name of the AWS CodeCommit repository for your application source code.
    3. For Branch name, choose the branch.
    4. For Output artifacts, specify SourceArtifact, and then choose Save.
  7. On the Editing: hello-world page, choose Save and acknowledge the pop-up warning.

Application image pipeline

Test your end-to-end pipeline

Your pipeline should have everything for running an end-to-end native AWS continuous deployment. Now, test its functionality by pushing a code change to your base image repository.

  1. Make a change to your configured source repository, and then commit and push the change.
  2. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  3. Choose your pipeline from the list.
  4. Watch the pipeline progress through its stages. As the base image is built and pushed to Amazon ECR, see how the second pipeline is triggered, too. When the execution of your pipeline is complete, your application image is pushed to Amazon ECR, and you are now ready to deploy your application. For more information about continuously deploying your application, see Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment in the AWS CodePipeline User Guide.
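
If you prefer to watch the runs from the command line instead of the console, the AWS CLI exposes the same stage-by-stage state (the pipeline names below match the ones used in this walkthrough):

# Summarize the state of each stage in the base image pipeline
aws codepipeline get-pipeline-state --name base-image

# Do the same for the downstream application pipeline
aws codepipeline get-pipeline-state --name hello-world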

Conclusion

In this post, we showed you how to create a complete, end-to-end continuous deployment (CD) pipeline with Amazon ECR and AWS CodePipeline. You saw how to initiate an AWS CodePipeline pipeline update by uploading a new image to Amazon ECR. Support for Amazon ECR in AWS CodePipeline makes it easier to set up a continuous delivery pipeline and use the AWS Developer Tools for CI/CD.

Source

The Latest Trends for Modern Applications Built on AWS

In Sumo Logic’s latest report, The State of Modern Applications & DevSecOps in the Cloud, we were able to get a unique perspective on how companies are continuing to build using AWS and which emerging technologies and trends are rising in adoption. Sumo Logic is in a unique position to get this data — and adoption insights — by ingesting data from over 1,600 customers and aggregating and anonymizing the data to get a holistic view on how modern companies are continuing to innovate.

Based on our data, 70 percent of our customers are solely building on AWS while another 9 percent are using a multi-cloud approach. Segmenting the whole dataset on AWS quickly shows which technologies are most prevalent within AWS environments. We pulled data on:

  • AWS Lambda and Serverless Adoption
  • Docker, Kubernetes, and Containers
  • Popular Databases within AWS
  • AWS CloudTrail, VPC Flow and GuardDuty Adoption

AWS Lambda and Serverless Adoption

Introduced four years ago at re:Invent, AWS Lambda is an event-driven technology offered by AWS. It’s mostly used for on-demand modern applications needing to run quickly and often. With the rise of IoT devices, Lambda has proved to be a helpful framework for dealing with multiple APIs.

Sumo Logic has been tracking Lambda adoption for the past three years and we’ve seen the growth both anecdotally and with data. The use of Lambda in production has more than doubled since 2016 and nearly a third of AWS applications now use Lambda.

Despite the adoption of Lambda by AWS users, there is still a lot of uncertainty out there in the industry regarding how serverless computing fits into the overall modern application stack. As DevSecOps continues to grow and data becomes a strategic tool for more than just the operations, development, and security departments, it has also become a crucial business resource. As we continue to see data democratization occur across all lines of the business, customers are no longer concerned about where their applications reside, which is why many believe serverless is the perfect vehicle for making that data readily available to everyone.

Docker and Containers in AWS

Amazon also introduced EC2 Container Service at re:Invent 2014 (along with AWS Lambda). Containers and orchestration technology allow developers to push code in packages, encouraging microservices and allowing teams to ship more frequently.

The benefits are obvious, and adoption is unsurprisingly increasing among teams building on AWS. Both ECS and Kubernetes adoption grew 6 percent over the last year.

Docker, one of the first container technologies and typically the most prevalent, continues to grow its use in AWS. More than 25 percent of enterprises use Docker containers in AWS.

Database Usage in AWS

I wrote recently about the split between NoSQL and relational database use in AWS. Modern enterprises are continuing to use NoSQL databases for their flexibility and scalability. These technology choices are often at the expense of legacy tools like Oracle and Microsoft.

In the above article I said, “DevOps teams need more agile and flexible tools built for continuous deployment. These buzzy trends are having tangible impacts on legacy tools, notably databases. Oracle, once the dominant database in the market, isn’t in as high of demand as more teams opt to use NoSQL databases in AWS and other public cloud environments.”

AWS Security Tools of Choice

As hacks and breaches become more mainstream and seemingly more frequent, companies have increased their focus on cloud security. It’s a top concern for companies adopting a public cloud strategy as well. We’re relieved to find more than half (56 percent) of cloud enterprises are taking advantage of the AWS audit service, CloudTrail. We also found 29 percent of enterprises are using VPC Flow Logs to bolster their security efforts.

Conversely, threat intelligence services such as AWS GuardDuty (10 percent) and the native Sumo Logic Threat Intelligence powered by CrowdStrike (17 percent) are being utilized by a smaller subset of enterprises, a little more than one in four in total. Threat intelligence tools provide an extra layer of security by using proprietary machine learning analysis to monitor and thwart attacks.

This is an improvement over last year’s report, however, we still have a long way to go in order to continue to stay abreast of new and emerging threats and take the necessary steps to implement the right security monitoring tools to harden our overall security posture.

Source

Use AWS CodeDeploy to Implement Blue/Green Deployments for AWS Fargate and Amazon ECS

We are pleased to announce support for blue/green deployments for services hosted using AWS Fargate and Amazon Elastic Container Service (Amazon ECS).

In AWS CodeDeploy, blue/green deployments help you minimize downtime during application updates. They allow you to launch a new version of your application alongside the old version and test the new version before you reroute traffic to it. You can also monitor the deployment process and, if there is an issue, quickly roll back.

With this new capability, you can create a new service in AWS Fargate or Amazon ECS that uses CodeDeploy to manage the deployments, testing, and traffic cutover for you. When you make updates to your service, CodeDeploy triggers a deployment. This deployment, in coordination with Amazon ECS, deploys the new version of your service to the green target group, updates the listeners on your load balancer to allow you to test this new version, and performs the cutover if the health checks pass.

In this post, I show you how to configure blue/green deployments for AWS Fargate and Amazon ECS using AWS CodeDeploy. For information about how to automate this end-to-end using a continuous delivery pipeline in AWS CodePipeline and Amazon ECR, read Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source.

Let’s dive in!

Prerequisites

To follow along, you must have these resources in place:

  • A Docker image repository that contains an image you have built from your Dockerfile and application source. This walkthrough uses Amazon ECR. For more information, see Creating a Repository and Pushing an Image in the Amazon Elastic Container Registry User Guide.
  • An Amazon ECS cluster. You can use the default cluster created for you when you first use Amazon ECS or, on the Clusters page of the Amazon ECS console, you can choose a Networking only cluster. For more information, see Creating a Cluster in the Amazon Elastic Container Service User Guide.

Note: The image repository and cluster must be created in the same AWS Region.

Set up IAM service roles

Because you will be using AWS CodeDeploy to handle the deployments of your application to Amazon ECS, AWS CodeDeploy needs permissions to call Amazon ECS APIs, modify your load balancers, invoke Lambda functions, and describe CloudWatch alarms. Before you create an Amazon ECS service that uses the blue/green deployment type, you must create the AWS CodeDeploy IAM role (ecsCodeDeployRole). For instructions, see Amazon ECS CodeDeploy IAM Role in the Amazon ECS Developer Guide.

Create an Application Load Balancer

To allow AWS CodeDeploy and Amazon ECS to control the flow of traffic to multiple versions of your Amazon ECS service, you must create an Application Load Balancer.

Follow the steps in Creating an Application Load Balancer and make the following modifications:

  1. For step 6a in the Define Your Load Balancer section, name your load balancer sample-website-alb.
  2. For step 2 in the Configure Security Groups section:
    1. For Security group name, enter sample-website-sg.
    2. Add an additional rule to allow TCP port 8080 from anywhere (0.0.0.0/0).
  3. In the Configure Routing section:
    1. For Name, enter sample-website-tg-1.
    2. For Target type, choose to register your targets with an IP address.
  4. Skip the steps in the Create a Security Group Rule for Your Container Instances section.

Create an Amazon ECS task definition

Create an Amazon ECS task definition that references the Docker image hosted in your image repository. For the sake of this walkthrough, we use the Fargate launch type and the following task definition.

{
  "executionRoleArn": "arn:aws:iam::account_ID:role/ecsTaskExecutionRole",
  "containerDefinitions": [{
    "name": "sample-website",
    "image": "<YOUR ECR REPOSITORY URI>",
    "essential": true,
    "portMappings": [{
      "hostPort": 80,
      "protocol": "tcp",
      "containerPort": 80
    }]
  }],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "family": "sample-website"
}

Note: Be sure to change the value for “image” to the Amazon ECR repository URI for the image you created and uploaded to Amazon ECR in Prerequisites.
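
If you would rather register the task definition from the command line than paste it into the console, a sketch with the AWS CLI looks like this (the file name is hypothetical; save the JSON above to it first):

# Register the task definition shown above with Amazon ECS
aws ecs register-task-definition --cli-input-json file://sample-website-taskdef.json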

Creating an Amazon ECS service with blue/green deployments

Now that you have completed the prerequisites and setup steps, you are ready to create an Amazon ECS service with blue/green deployment support from AWS CodeDeploy.

Create an Amazon ECS service

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. From the list of clusters, choose the Amazon ECS cluster you created to run your tasks.
  3. On the Services tab, choose Create.

This opens the Configure service wizard. From here you are able to configure everything required to deploy, run, and update your application using AWS Fargate and AWS CodeDeploy.

  1. Under Configure service:
    1. For the Launch type, choose FARGATE.
    2. For Task Definition, choose the sample-website task definition that you created earlier.
    3. Choose the cluster where you want to run your application's tasks.
    4. For Service Name, enter Sample-Website.
    5. For Number of tasks, specify the number of tasks that you want your service to run.
  2. Under Deployments:
    1. For Deployment type, choose Blue/green deployment (powered by AWS CodeDeploy). This creates a CodeDeploy application and deployment group using the default settings. You can see and edit these settings in the CodeDeploy console later.
    2. For the service role, choose the CodeDeploy service role you created earlier.
  3. Choose Next step.
  4. Under VPC and security groups:
    1. From Subnets, choose the subnets that you want to use for your service.
    2. For Security groups, choose Edit.
      1. For Assigned security groups, choose Select existing security group.
      2. Under Existing security groups, choose the sample-website-sg group that you created earlier.
      3. Choose Save.
  5. Under Load Balancing:
    1. Choose Application Load Balancer.
    2. For Load balancer name, choose sample-website-alb.
  6. Under Container to load balance:
    1. Choose Add to load balancer.
    2. For Production listener port, choose 80:HTTP from the first drop-down list.
    3. For Test listener port, in Enter a listener port, enter 8080.
  7. Under Additional configuration:
    1. For Target group 1 name, choose sample-website-tg-1.
    2. For Target group 2 name, enter sample-website-tg-2.
  8. Under Service discovery (optional), clear Enable service discovery integration, and then choose Next step.
  9. Do not configure Auto Scaling. Choose Next step.
  10. Review your service for accuracy, and then choose Create service.
  11. If everything is created successfully, choose View service.

You should now see your newly created service, with at least one task running.

When you choose the Events tab, you should see that Amazon ECS has deployed the tasks to your sample-website-tg-1 target group. When you refresh, you should see your service reach a steady state.

In the AWS CodeDeploy console, you will see that the Amazon ECS Configure service wizard has created a CodeDeploy application for you. Click into the application to see other details, including the deployment group that was created for you.

If you click the deployment group name, you can view other details about your deployment. Under Deployment type, you’ll see Blue/green. Under Deployment configuration, you’ll see CodeDeployDefault.ECSAllAtOnce. This indicates that after the health checks are passed, CodeDeploy updates the listeners on the Application Load Balancer to send 100% of the traffic over to the green environment.

Under Load Balancing, you can see details about your target groups and your production and test listener ARNs.

Let’s apply an update to your service to see the CodeDeploy deployment in action.

Trigger a CodeDeploy blue/green deployment

Create a revised task definition

To test the deployment, create a revision to your task definition for your application.

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. From the navigation pane, choose Task Definitions.
  3. Choose your sample-website task definition, and then choose Create new revision.
  4. Under Tags:
    1. In Add key, enter Name.
    2. In Add value, enter Sample Website.
  5. Choose Create.

Update ECS service

You now need to update your Amazon ECS service to use the latest revision of your task definition.

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. Choose the Amazon ECS cluster where you’ve deployed your Amazon ECS service.
  3. Select the check box next to your sample-website service.
  4. Choose Update to open the Update Service wizard.
    1. Under Configure service, for Task Definition, choose 2 (latest) from the Revision drop-down list.
  5. Choose Next step.
  6. Skip Configure deployments. Choose Next step.
  7. Skip Configure network. Choose Next step.
  8. Skip Set Auto Scaling (optional). Choose Next step.
  9. Review the changes, and then choose Update Service.
  10. Choose View Service.

You are now taken to the Deployments tab of your service where you can see details about your blue/green deployment.

You can click the deployment ID to go to the details view for the CodeDeploy deployment.

From there, you can see the deployment's status:

You can also see the progress of the traffic shifting:

If you notice issues, you can stop and roll back the deployment. This shifts traffic back to the original (blue) task set and stops the deployment.

By default, CodeDeploy waits one hour after a successful deployment before it terminates the original task set. You can use the AWS CodeDeploy console to shorten this interval. After the task set is terminated, CodeDeploy marks the deployment complete.

Conclusion

In this post, I showed you how to create an AWS Fargate-based Amazon ECS service with blue/green deployments powered by AWS CodeDeploy. I showed you how to configure the required and prerequisite components, such as an Application Load Balancer and associated target groups, all from the AWS Management Console. I hope that the information in this post helps you get started implementing this for your own applications!

Source

Run your CI with Jenkins and CD with Azure DevOps – Microsoft DevOps Blog

Azure release pipelines provide you with a first-class experience for integrating with Jenkins. You can have Jenkins as your Continuous Integration (CI) system and use Azure DevOps Release for your Continuous Deployment (CD) and get all the benefits of Azure DevOps like:

  • End to end traceability for your CI/CD workflow.
  • Track your commits and work items.
  • Enable manual approvals and deployment gates on your pipeline.
  • Deploy to various services (Azure) via Azure pipelines.

In this example, you will build a Java web app using Jenkins and deploy it to an Azure Linux VM using Azure DevOps release pipelines.

Ensure the repo where your code is hosted (GitHub, GitHub Enterprise, or GitLab) is linked with your Jenkins project. Also, ensure the JIRA server plugin is installed on Jenkins so that you can track your JIRA issues.

Now you can configure Jenkins with Azure DevOps to run your Continuous Deployment workflows.

  1. Install the Team Foundation Server plugin on Jenkins.
  2. Inside your Jenkins project, you will find a new post-build action, “Trigger release in TFS/Team Services”. Add the action to your project.
  3. Provide the collection URL as https://<accountname>.visualstudio.com/DefaultCollection/
  4. Create a credential (username/password) for Azure DevOps with a PAT as the password and leave the username empty. Pass this credential to the action.
  5. Now you can select the Azure DevOps project and Release definition from the dropdowns. Choose the Release Definition you want to trigger upon completion of this Jenkins job.

Now a new release (CD) gets triggered every time your Jenkins CI job is completed.

However, to consume the Jenkins job, you need to define Jenkins as an artifact source in Azure DevOps.

  1. Go to your Azure DevOps project and create a new service connection for Jenkins.
  2. Now, go to your Release pipeline and add a new Jenkins artifact source and select the Jenkins job you want to consume as part of your pipeline.
  3. Every time you run your pipeline, artifacts from your Jenkins job are downloaded automatically and made available for you. In addition, the associated commits and JIRA issues are extracted so that you can compare release deployments and get full traceability of your workflow.

That’s it. You now have a complete DevOps workflow with Jenkins as CI and Azure DevOps as CD, with full traceability of your workflow.

You can take advantage of Azure DevOps release features such as approvals and deployment gates, integrate with ServiceNow, deploy to Azure or AWS, deploy to Linux VMs using Ansible, and much more.

Source

Cisco Looks to Build DevOps Community

Cisco Systems is making a concerted effort to create an era of détente between networking professionals and the rest of the IT operations teams that have embraced DevOps practices.

Susie Wee, senior vice president for the DevNet community at Cisco, said the goal is to provide network operations (NetOps) teams the skills required to programmatically expose a range of self-service capabilities to developers.

At the same time, Cisco is encouraging developers to deploy applications directly on routers and switches that now incorporate general-purpose processors. Those processors would run applications in a way that significantly reduces network latency, while the custom ASICs that Cisco develops would continue to handle all the networking functions.

Wee said that dichotomy is now driving Cisco to recognize two classes of developers within its DevNet community: the first is made up of traditional application developers, and the second is more focused on managing infrastructure as code.

Cisco is now extending that latter initiative into the realm of software-defined wide area networks (SD-WANs). The company has added integrated firewall, intrusion prevention and URL-filtering technologies to its SD-WAN platform. In addition, there are code samples, videos, learning labs and sandboxes available on Cisco DevNet that are intended to make it easier to learn how to programmatically manage Cisco SD-WANs using open application programming interfaces (APIs).

Cisco claims to now have 530,000 developers participating in DevNet. On the one hand, the company is trying to entice NetOps teams that have been reluctant to give up legacy command line interfaces to modernize their networking environments. On the other hand, Cisco is trying to attract more application developers who increasingly find themselves encountering latency issues as they move to deploy applications in highly distributed computing environments, such as an Internet of Things (IoT) application.

Over time, Cisco and other providers of network operating systems also will be moving away from monolithic architecture to embrace microservices, which would make it possible to programmatically invoke networking services on a more granular level.

Of course, there may be some developers who would prefer to programmatically take control over networking along with the rest of the IT infrastructure. But Cisco is clearly betting most organizations will want to make it easier for their NetOps and DevOps teams to work more closely together. Today, most enterprise IT organizations can spin up a virtual machine in a matter of minutes; provisioning the network connections to that virtual machine, however, is still measured in days and weeks.

Other networking infrastructure companies besides Cisco have realized the need to make networking more programmable. But in terms of the resources the company is making available to solve that problem, Cisco appears ready to make a significantly larger commitment.
Source

Analyzing the DNA of DevOps

If you were to analyze the DNA of DevOps, what would you find in its ancestry report?

This article is not a methodology bake-off, so if you are looking for advice or a debate on the best approach to software engineering, you can stop reading here. Rather, we are going to explore the genetic sequences that have brought DevOps to the forefront of today’s digital transformations.

Much of DevOps has evolved through trial and error, as companies have struggled to be responsive to customers’ demands while improving quality and standing out in an increasingly competitive marketplace. Adding to the challenge is the transition from a product-driven to a service-driven global economy that connects people in new ways. The software development lifecycle is becoming an increasingly complex system of services and microservices, both interconnected and instrumented. As DevOps is pushed further and faster than ever, the speed of change is wiping out slower traditional methodologies like waterfall.

We are not slamming the waterfall approach—many organizations have valid reasons to continue using it. However, mature organizations should aim to move away from wasteful processes, and indeed, many startups have a competitive edge over companies that use more traditional approaches in their day-to-day operations.

Ironically, lean, Kanban, continuous, and agile principles and processes trace back to the early 1940s, so DevOps cannot claim to be a completely new idea.

Let’s start by stepping back a few years and looking at the waterfall, lean, and agile software development approaches. The figure below shows a “haplogroup” of the software development lifecycle. (Remember, we are not looking for the best approach but trying to understand which approach has positively influenced our combined 67 years of software engineering and the evolution to a DevOps mindset.)

“A fool with a tool is still a fool.” -Mathew Mathai

The traditional waterfall method

From our perspective, the oldest genetic material comes from the waterfall model, first introduced by Dr. Winston W. Royce in a paper published in 1970.

Like a waterfall, this approach emphasizes a logical and sequential progression through requirements, analysis, coding, testing, and operations in a single pass. You must complete each sequence, meet criteria, and obtain a signoff before you can begin the next one. The waterfall approach benefits projects that need stringent sequences and that have a detailed and predictable scope and milestone-based development. Contrary to popular belief, it also allows teams to experiment and make early design changes during the requirements, analysis, and design stages.

Lean thinking

Although lean thinking dates to the Venetian Arsenal in the 1450s, we start the clock when Toyota created the Toyota Production System, developed by Japanese engineers between 1948 and 1972. Toyota published an official description of the system in 1992.

Lean thinking is based on five principles: value, value stream, flow, pull, and perfection. The core of this approach is to understand and support an effective value stream, eliminate waste, and deliver continuous value to the user. It is about delighting your users without interruption.

Kaizen

Kaizen is based on incremental improvements; the Plan->Do->Check->Act lifecycle moved companies toward a continuous improvement mindset. Originally developed to improve the flow and processes of the assembly line, the Kaizen concept also adds value across the supply chain. The Toyota Production System was one of the early implementers of Kaizen and continuous improvement. Kaizen and DevOps work well together in environments where workflow goes from design to production. Kaizen focuses on two areas:

  • Flow
  • Process

Continuous delivery

Kaizen inspired the development of processes and tools to automate production. Companies were able to speed up production and improve the quality, design, build, test, and delivery phases by removing waste (including culture and mindset) and automating as much as possible using machines, software, and robotics. Much of the Kaizen philosophy also applies to lean business and software practices and continuous delivery deployment for DevOps principles and goals.

Agile

The Manifesto for Agile Software Development appeared in 2001, authored by Alistair Cockburn, Bob Martin, Jeff Sutherland, Jim Highsmith, Ken Schwaber, Kent Beck, Ward Cunningham, and others.

Agile is not about throwing caution to the wind, ditching design, or building software in the Wild West. It is about being able to create and respond to change. Agile development is based on twelve principles and a manifesto that values individuals and collaboration, working software, customer collaboration, and responding to change.

Disciplined agile

Since the Agile Manifesto has remained static for nearly 20 years, many agile practitioners have looked for ways to add choice and subjectivity to the approach. Additionally, the Agile Manifesto focuses heavily on development, so a tweak toward solutions rather than code or software is especially needed in today’s fast-paced development environment. Scott Ambler and Mark Lines co-authored Disciplined Agile Delivery and The Disciplined Agile Framework, based on their experiences at Rational, IBM, and organizations in which teams needed more choice or were not mature enough to implement lean practices, or where context didn’t fit the lifecycle.

The significance of DAD and DA is that it is a process-decision framework that enables simplified process decisions around incremental and iterative solution delivery. DAD builds on the many practices of agile software development, including scrum, agile modeling, lean software development, and others. The extensive use of agile modeling and refactoring (including encouraging automation through test-driven development), the incorporation of lean thinking such as Kanban alongside XP, scrum, and RUP through a choice of five agile lifecycles, and the introduction of the architect owner give agile practitioners added mindsets, processes, and tools to successfully implement DevOps.

DevOps

As far as we can gather, DevOps emerged during a series of DevOpsDays in Belgium in 2009, going on to become the foundation for numerous digital transformations. Microsoft principal DevOps manager Donovan Brown defines DevOps as “the union of people, process, and products to enable continuous delivery of value to our end users.”

Let’s go back to our original question: What would you find in the ancestry report of DevOps if you analyzed its DNA?

We are looking at history dating back 80, 48, 26, and 17 years—an eternity in today’s fast-paced and often turbulent environment. By nature, we humans continuously experiment, learn, and adapt, inheriting strengths and resolving weaknesses from our genetic strands.

Under the microscope, we will find traces of waterfall, lean thinking, agile, scrum, Kanban, and other genetic material. For example, there are traces of waterfall for detailed and predictable scope, traces of lean for cutting waste, and traces of agile for promoting increments of shippable code. The genetic strands that define when and how to ship the code are where DevOps lights up in our DNA exploration.

You use the telemetry you collect from watching your solution in production to drive experiments, confirm hypotheses, and prioritize your product backlog. In other words, DevOps inherits from a variety of proven and evolving frameworks and enables you to transform your culture, use products as enablers, and most importantly, delight your customers.

If you are comfortable with lean thinking and agile, you will enjoy the full benefits of DevOps. If you come from a waterfall environment, you will receive help from a DevOps mindset, but your lean and agile counterparts will outperform you.

eDevOps

In 2016, Brent Reed coined the term eDevOps (no Google or Wikipedia references exist to date), defining it as “a way of working (WoW) that brings continuous improvement across the enterprise seamlessly, through people, processes and tools.”

Brent found that agile was failing in IT: Businesses that had adopted lean thinking were not achieving the value, focus, and velocity they expected from their trusted IT experts. Frustrated at seeing an “ivory tower” in which siloed IT services were disconnected from architecture, development, operations, and help desk support teams, he applied his practical knowledge of disciplined agile delivery and added some goals and practical applications to the DAD toolset, including:

  • Focus and drive of culture through a continuous improvement (Kaizen) mindset, bringing people together even when they are across the cubicle
  • Velocity through automation (TDD + refactoring everything possible), removing waste and adopting a TOGAF, JBGE (just barely good enough) approach to documentation
  • Value through modeling (architecture modeling) and shifting left to enable right through exposing anti-patterns while sharing through collaboration patterns in a more versatile and strategic modern digital repository

Using his experience with AI at IBM, Brent designed a maturity model for eDevOps that incrementally automates dashboards for measuring and decision-making purposes so that continuous improvement through continuous deployment (automating from development to production) is a real possibility for any organization. eDevOps is an effective transformation program based on disciplined DevOps that enables:

  • Business to DevOps (BizDevOps)
  • Security to DevOps (SecDevOps)
  • Information to DevOps (DataDevOps)
  • Loosely coupled technical services while bringing together and delighting all stakeholders
  • Building potentially consumable solutions every two weeks or faster
  • Collecting, measuring, analyzing, displaying, and automating actionable insight through the DevOps processes from concept through live production use
  • Continuous improvement following a Kaizen and disciplined agile approach

The next stage in the development of DevOps

Will DevOps ultimately be considered hype—a collection of more tech thrown at corporations and added to the already extensive list of buzzwords? Time, of course, will tell how DevOps will progress. However, DevOps’ DNA must continue to mature and be refined, and developers must understand that it is neither a silver bullet nor a remedy to cure all ailments and solve all problems.

DevOps != Agile != Lean Thinking != Waterfall

DevOps != Tools != Technology

DevOps ⊃ Agile ⊃ Lean Thinking ⊃ Waterfall

Source