What will be the key DevOps trends in 2019?

What will be the key DevOps trends for the year ahead?

As boardrooms wake up to the race for digital transformation and their consumers demand a new digital journey, software development teams need to step up to the challenge.

The last 12 months have seen further innovation in DevOps, along with the acceleration of cloud adoption and advances in technologies, such as containerisation and microservices.

Automation and AI

No prizes for my first prediction. Automation technologies will continue to extend further into the DevOps pipeline to deliver a quicker time-to-market with reduced errors, freeing up staff to focus on innovation as well as cutting costs. Automation will also become more prevalent in areas such as predictive software development.

Artificial intelligence is already starting to facilitate predictive analytics and coding to replace manually intensive tasks with intelligent insights, recommendations and automation. And as more testing is required to identify and rectify performance and security issues in production, the use of test automation and test-driven development will also increase.

Continuous monitoring of application use and performance will power feedback loops to highlight and address problems instantly.

Competition drives consolidation

2018 has seen industry consolidation with the acquisitions of GitHub (by Microsoft) and Red Hat (by IBM), as the big vendors bolster their DevOps propositions and aim to become the go-to ‘one-stop-shop’. No doubt we will see more of this strategy in 2019 as companies strengthen their position with customers and reduce integration headaches between tools.

But as the software development and DevOps communities are well known for getting behind popular tools, new leaders have emerged. It is likely we will see more from the large cloud providers as they entice developers to their platforms.

Containerisation and microservices

The growth in microservices and distributed systems has driven a massive increase in container workloads. Running containers in production is becoming standard practice and Kubernetes has been widely adopted as the container orchestration tool of choice, evidenced by the launch of Azure AKS and AWS EKS.

We will see legacy platforms start to become containerised or replaced as the benefits of this technology are now accepted. As organisations move from monolith to microservices, serverless computing – or Functions-as-a-Service – offered by leading cloud providers, will be used increasingly to focus on business logic rather than infrastructure.

Adoption will accelerate as organisations realise they can deliver applications at record speed and lower cost and this trend will undoubtedly generate increased workloads on cloud platforms. However, there is a risk of locking in applications to platform APIs and other services, which will be a concern for some.

The increasing use of microservices and distributed containerised systems brings more pressure on the network, which must be as agile and automation-ready as the applications running on it, rather than become a bottleneck.

One technology for 2019 that promises to deliver high-performance, failsafe, dynamic and flexible networks without the cost of legacy MPLS is SD-WAN. We will also see growth in multi- and hybrid-cloud adoption, as it is no longer simply a choice of cloud provider, but a means to optimise cost, performance, architecture and location.

DevOps at scale

As business executives focus on speed and competitiveness, successful DevOps initiatives will drive more focus and investment. We will see DevOps scale to more teams and projects, with additional governance as businesses shift to enterprise tooling. Open Source has been very popular and an effective way to prove the capability of tools to automate the DevOps pipeline, but enterprises now need the assurance of stability, security, scalability and compatible integrations as this becomes a critical function.

Security and DevSecOps

There are over 100 billion lines of new code being produced every year – and each one can introduce a new vulnerability. Standard security checks are not enough anymore, especially with increasing audit and compliance requirements.

Security is increasingly shifting towards developers, to ensure applications are secure by design and vulnerabilities introduced by the increasing use of open source components are captured and addressed. If the right tools are not in place to identify vulnerable software, attackers will find and exploit these weaknesses first.

Marlene Spensley, application optimisation practice lead, Nuvias

Source

Split Launches Free Managed Feature-Flag Service

As part of an effort to attract more DevOps teams, Split is making available a free managed feature-flag service for up to five users within an organization.

Based on the same commercial managed service Split provides today, the free tier gives those five users unlimited feature-flagging and unlimited usage. Company president Trevor Stuart said the hope is that the number of DevOps teams employing feature-flagging will rise to the point where the organization opts to pay for the commercial managed service. The free managed service also includes user segmentation capabilities to provide control over how many end users might be exposed to any new feature.

The paid managed service provides access to additional capabilities such as integrations, access and permissions, team collaboration and full-stack experimentation.

Stuart noted many DevOps teams today attempt to incorporate feature-flagging into their processes by implementing one of more than 40 different open source projects. However, many of them quickly discover that building and maintaining a feature-flag platform is a difficult undertaking. Many developers decide building a feature-flagging platform is a challenge they want to take on because it provides a critical set of DevOps capabilities, but when those developers leave the organization, the DevOps team is suddenly left with a platform no one else in the organization knows how to support, he said.

Feature flags, also known as feature toggles, provide a means for DevOps teams to test and add new capabilities to their applications without having to take the entire application offline. Feature-flagging also enables phased application rollouts that make it easier for DevOps teams to assess new functionality as it is added. Any problematic update can be turned off with a single click. The downside is they generally result in more branches of applications that need to be managed carefully within the context of a DevOps process. Because of that complexity, vendors such as Split have been making the case for employing platforms that enable DevOps teams to more easily incorporate feature-flagging within a DevOps process.
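
To make the mechanics concrete, here is a hypothetical Python sketch of a feature flag with percentage-based user segmentation; the flag name, rollout table and helper functions are illustrative assumptions, not Split's API:

    import hashlib

    # Illustrative rollout table: expose the hypothetical "new-checkout" flag
    # to roughly 10% of users.
    ROLLOUT = {"new-checkout": 10}

    def is_enabled(flag: str, user_id: str) -> bool:
        if flag not in ROLLOUT:
            return False
        # Hash the user deterministically into a bucket from 0-99 so the same
        # user always receives the same treatment.
        bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < ROLLOUT[flag]

    def checkout(user_id: str) -> str:
        if is_enabled("new-checkout", user_id):
            return "new checkout flow"       # the functionality being phased in
        return "existing checkout flow"      # the safe path; rollback = set the rollout to 0

A managed service like the one described above replaces the hard-coded rollout table with targeting rules that can be changed without redeploying, which is what makes the one-click rollback possible.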

Stuart said the ultimate goal is to provide a more controlled approach to experimentation. Instead of randomly testing new features on end users, feature-flagging makes it possible to more easily test features on specific end users, said Stuart.

It may be a while longer before feature-flagging is adopted more pervasively. But as DevOps teams become increasingly sophisticated, there's little doubt feature-flagging will become more commonly employed. The simple truth is that uncontrolled application experiments on end users usually are not appreciated. Savvy DevOps teams are finding ways to test application functions in real-world scenarios with minimal disruption to the majority of end users.

The only real issue now is determining how best to go about implementing feature-flagging at a time when most DevOps teams already are struggling to keep pace with their existing application development and deployment projects.

Source

Top 3 cloud computing predictions

In 2018, we saw the emergence of several important pivot points that changed the trajectory of cloud in a big way, making it an even more important part of enterprises’ core IT strategies for 2019.

1. The rise of multi-cloud

As cloud has become the default paradigm, how to place bets on multiple providers and then make those clouds work together seamlessly is a key question CIOs and CTOs have been asking themselves.

Next year, we expect businesses to place bets on multiple cloud platforms, and cloud providers will need to work together to deliver a seamless experience.

Vendors will be looking at a variety of considerations – from performance consistency, network connectivity and abstraction to management and API consistency – to make sure the migration journey doesn't become more complicated just because customers want to bet on multiple cloud platforms.

A few years from now, the majority of CIOs will have to manage multiple clouds, service level agreements (SLAs), touch points and data flows across different platforms.

Any cloud provider that does a good job of alleviating these pain points, by playing well with other cloud providers and eliminating roadblocks to a consistent experience across clouds, will be successful.

2. Migration considerations

As more legacy applications, complex workloads and mission-critical workloads start to move to the cloud, enterprises will need to think more about application and workload migration, both in terms of timeline and choice. Over the last decade, we've seen that items moved to the cloud were either experimental or cloud-native – legacy migration has not been a big challenge or concern for CIOs.

But as more mainstream and legacy apps move to the cloud, migration and onboarding will become a much larger consideration. This will be a key priority for businesses; it will be important for suppliers to be a true partner to customers and provide guidance on the transition.

3. The open source phenomenon

We will continue to see massive democratisation of technology as a whole – from hypervisors and virtualisation to cloud management tools and technology. The democratisation of the cloud will be led by open source technology, and will happen across the board from operating systems to the application stack.

The developer sphere will continue to drive this, and will become an even more important audience in 2019 and beyond.

Overall, in 2019, businesses will be worrying less about where workloads reside, and will be more focused on what business results they can drive around productivity and efficiency.

As the cloud landscape changes to accommodate these and other emerging trends and technologies, it’s important for them to choose a cloud provider that can change and grow as they do, developing a deep partnership that lasts.

Source

Hosting Python packages in Azure DevOps – Premier Developer

I started to build a solution based on Microsoft's recommended architecture for a Modern Data Warehouse. It is going well, and will most likely be the topic of another blog post very soon.

In the early stages of this project, while building some transformation and analytics Python scripts in Databricks, I asked myself if I could build some custom Python libraries and store them as private artifacts in my organization's Azure DevOps org. And then (this part is still uncertain) install them directly on a Databricks cluster.

So, I set out to try to solve at least the first part of this issue: creating and managing Python packages in Azure DevOps. And here it goes.

“A requirement of this self-imposed challenge is to build 100% of it in a Windows environment with tools used by Microsoft developers. This is a tough one. Python developers, with all their tools and documentation, are much better served in a Linux-based environment. But for the sake of this exercise, I went with a Windows-only setup. Let's see how far I get!”

Setting up the Azure DevOps environment for the packages

Let's get started and partially set up our Azure DevOps CI/CD. We will come back to it later to complete it. But for now, we need the git repo to be set up and configured.

– Go ahead and create a project for your Python packages. If the packages will be part of a larger project, then go ahead and select that project and head to the Repos section.

– In the Repos section, create a new git repo and enter these basic items. Namely, a repo name (in my case: py-packages) and a .gitignore file prefilled with Python-type entries.

[screenshot]

– Click the Clone button in the top-right of the screen, and choose the option to clone in Visual Studio.

– Clone the repo in a local folder. Your VS 2017 clone repo window should look like this:

[screenshot]

– Choose the path carefully. Later we will talk about creating Python virtual environments, which will need their own paths as well. In my local environment, these are created as follows:

  • Apps: C:\_WORK_\python\apps\...
  • Environments: C:\_WORK_\python\env\...

– In Visual Studio 2017, make sure you have installed the Python development tools. (Open up Visual Studio Installer; click Modify; check the box in Python development section; and check the boxes in the Optional components as shown below.)

[screenshot]

– Once the Python development tools are installed, go ahead and restart VS 2017 and create a Python application: File -> New -> Project -> Python Application. Note that “Create directory for solution” and “Create new Git repository” are unchecked. And, obviously, we are creating the app inside the folder we created when we cloned the repo from Azure DevOps.

[screenshot]

– A bit of optional housekeeping here.

  • Close VS 2017.
  • VS 2017 has created a folder called “PythonPackages” with a few files. We need these 2: PythonPackages.pyproj and PythonPackages.sln.
  • Cut and paste these two files into the folder above (py-packages).
  • Go ahead and delete the PythonPackages folder.
  • The folder structure of the local repo now looks like this:
    [screenshot]
  • Reopen VS 2017 by double-clicking on PythonPackages.sln

Setting up the Python (local) environment for the packages

I would say that, when working with Python, having a Python virtual environment is a must. It lets you create multiple environments, with different versions, for different applications, which you can destroy when you are done with your development.

Otherwise, your local environment will become bloated with libraries and customizations and will very soon start creating problems for you. Also, you will not remember which library was installed for which application, so this is a very useful feature.

Here is a step by step guide for this part:

  • Identify a path for the environment. Something you can remember easily (in my case: C:\_WORK_\python\env\3.6.5).
  • Open the Python solution by double-clicking on PythonPackages.sln.
  • Delete the orphan PythonPackages.py entry (the one with a yellow triangle next to it). The file went away when we deleted the folder above, but it still needs to be removed from the solution.
  • In Solution Explorer, under the PythonPackages app, select and right-click the Python Environments sub-tree. Choose the option Add virtual environment. Enter a path, as recommended above. It should look like this:
    [screenshot]
  • You now have a bare-bones Python solution and application; a virtual Python environment; and a git repository already hooked up to Azure DevOps. Let's now install some libraries and add some code!

Installing additional Python packages

Go ahead and select the new Python virtual environment we just created in Solution Explorer. Right-click and choose Install Python Package.

In the search box, search for and install the following packages:

  • pytest
  • wheel
  • twine

I will explain their usage throughout the text to follow.

The virtual environment has the packages it needs, and looks like this:

[screenshot]

Coding the Python packages

A package in Python is any folder, or subfolder, that has an __init__.py file in it (each folder or subfolder needs its own __init__.py file, even an empty one).

For example: Folder Pack1, can have an empty __init__.py file. It can also have a subfolder Sub1, with its own __init__.py file. We can then call functionality in Pack1 package and/or in Pack1.Sub1 package. Similarly, Pack1.Sub1 can also have a subfolder called SubSub1 with another __init__.py file and become its own sub-package.
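
As a purely illustrative sketch (the package, module and function names below are hypothetical, not from this project), the layout and the corresponding imports would look like this:

    # Layout on disk (every folder carries an __init__.py, even an empty one):
    #
    #   Pack1/
    #       __init__.py
    #       Sub1/
    #           __init__.py
    #           helpers.py      # hypothetical module defining greet()
    #
    # Code elsewhere can then import from the package or the sub-package:
    from Pack1.Sub1 import helpers

    print(helpers.greet())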

As you can see the simplicity and flexibility of the whole process is phenomenal!

So, let’s go ahead and create a new package.

In Solution Explorer, right-click on the PythonPackages app; select Add; select New Item; and in the list of items choose Python Package.

Choose a meaningful but short (ideally one-word) name for the top package. In my case I have selected mdw (short for modern data warehouse, as in the future I will add functionality used in the context of a modern data warehouse application).

As you can see, the Python package is just a folder with an __init__.py file in it. Select the mdw package; right-click; select Add; select New Item and add a new Python package. Choose a short name with no spaces (in my case: databricks, as I intend to place some Databricks-related functionality in it).

Now you have a subpackage. Its functionality can be retrieved as mdw.databricks.<function>.

The app should now look like this:

[screenshot]

Now let’s add some meaningful code.

You can write your functions directly in the __init__.py file. However, I prefer to write them in separate files and group them by functionality. At this time, I will add some Databricks functionality that will help me with its configuration.

Go ahead and create a new .py file under the databricks folder and call it databricks_config.py.

In the __init__.py add the following line of code: from .databricks_config import *

In the databricks_config.py file add the following:

    # a very useless function
    def hi():
        return "Hello World!"

We now have a package, a subpackage, and a useless function that does nothing except return the string "Hello World!".

Testing the Python packages

Before we start compiling this package and deploying it, we should write some unit tests. This way we will know that the functions we write are working. Note that Python is an interpreted language: if we make any errors, we will only catch them when we run the code. You should write tests for every function you write and run the tests before packaging any new version. It is the best way to make sure you are not packaging code that doesn't work.

At the package level (mdw), add a new folder called tests. In that folder add an empty __init__.py file.

In the tests folder, add a Python file called test_databricks_config.py. Note: it is very important to prefix all your test files with "test_", as the CI framework uses this prefix to identify the unit test files it needs to run.

Inside the test_databricks_config.py file add the following:

    import pytest

    import mdw.databricks as db

    def test_hi():
        s = db.hi()
        assert isinstance(s, str)

This section of code does very little in terms of testing. It imports the pytest library, which provides the unit testing framework; it imports the subpackage with the functionality we want; it calls the function; and it checks that the return value is of the type we want. For now, that is all we need.

Now let's run the tests. Switch to the Python Environments tab (next to Solution Explorer), and click Open in PowerShell.

In the PowerShell window type: python -m pytest ..\..\apps\py-packages\mdw\tests. If you follow the file structure described in this article, this command will find the tests folder for the mdw package and run all the tests in it.

In our case, there is only 1 test, and that runs successfully. Add tests as you add functionality to the package.
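
As a hedged illustration of what those future tests might look like, the sketch below adds two more checks against the same hi() function; the exact-value assertion is my assumption based on the code above, not part of the original tutorial:

    import mdw.databricks as db

    def test_hi_returns_greeting():
        # Pin down the exact return value, not just the type.
        assert db.hi() == "Hello World!"

    def test_hi_is_not_empty():
        assert len(db.hi()) > 0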

Getting the Python package ready to be deployed

There are a few additional files we will need to add. At the root (application level) add the following files:

  • LICENSE.txt – This is a text file with the terms of usage and the type of license you want to provide. For public packages this is important. In our case we will host this package in a private repo, so it may not be strictly necessary; but in case the package makes it to the public, you want to place language here stating that it is prohibited to use this package without permission.
  • README.md – This is a markup file that will have the long description of the package; its functionality and how it is used. It will be bundled with the package.
  • MANIFEST.in – This is a file used by the packager to include or exclude files. Now create the file, and add the following to it:
    include README.md LICENSE.txt
  • setup.cfg – Another file used by the packager. Create the file and add the following as text:
    [metadata]
    license_files = LICENSE.txt

    [bdist_wheel]
    universal=1

  • .pypirc – This is an important file. It is a common file in Linux, but in Windows you cannot (easily) create a file whose name starts with a dot. Follow instructions here to make it happen. Leave this file empty for now; we will get back to it.
  • requirements.txt – In this file we will add all the packages that need to be installed prior to our package, or that our package depends on. Add the following:
    pip==18.1
    pytest==4.0.2
    wheel==0.32.3
    twine==1.12.1
    setuptools==40.6.3
  • setup.py – This is the file where the setup for the package creation goes. A screenshot of the file is below. Copy and paste the text from the GitHub link shared with this post. Obviously, replace the placeholder text at the top of the file with your data.

[screenshot]
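
Since the exact file lives at the GitHub link mentioned above, here is only a minimal, hypothetical sketch of what a setup.py for this layout could contain; the name, version, author and description values are placeholders to replace with your own data:

    # Minimal, illustrative setup.py; field values are placeholders.
    import setuptools

    with open("README.md", "r") as fh:
        long_description = fh.read()

    setuptools.setup(
        name="mdw",                          # the package name used by pip
        version="0.1.0",                     # bump this for every publish
        author="Your Name",
        description="Helpers for a modern data warehouse project",
        long_description=long_description,
        long_description_content_type="text/markdown",
        packages=setuptools.find_packages(),
        python_requires=">=3.6",
    )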

The final file structure of the project looks now like this:

[screenshot]

Let's now head to Azure DevOps and create the Build & Release pipeline for this package, as well as the Artifacts feed we will serve this package from.

Finalizing Azure DevOps pipeline and feed

Let’s start with the build.

Go to Azure DevOps, where we created our git repo for this package, and then Pipelines -> Builds -> New -> New build pipeline.

Select the code repo of the package as source and click Continue.

Search the templates for Python Package and click Apply. There are quite a few settings to go through.

In the Triggers tab, make sure Enable continuous integration is checked.

In the Variables section make sure python.version includes 3.6. You may want to remove any version this package is not compatible with or has not been tested against; let's keep only 3.6 and 3.7 for now.

Back on the Tasks page, choose a name for the build (avoid spaces in the name; they are always trouble), and in Agent Pool, choose Hosted Ubuntu 1604.

[screenshot]

In the Build & Test section:

  • Use Python – Keep the task as is and accept all defaults.
  • Install dependencies – Make sure the following is in the Scripts window: python -m pip install --upgrade pip && pip install -r requirements.txt.
  • Flake – Disabled. I don't use it here, but that is a personal choice; turn it on if you prefer.
  • pytest – Make sure this is in the Scripts window: pip install pytest && pytest tests --doctest-modules --junitxml=junit/test-results.xml.
  • Publish Test Results – Leave as is and accept all defaults.

In the Publish section:

  • Use Python – Keep the task as is and accept all defaults.
  • Install dependencies – You need to add this task here. It is the same as the task in the Build & Test section.
  • Build sdist – Make sure the Script line looks like: python setup.py bdist_wheel.
  • Publish Artifact: dist – This publishes all the artifacts that the Release will need. Accept all defaults.
  • Publish Artifacts: pypirc – This is important. In addition to all the files, the Release pipeline needs the .pypirc file to properly publish the package. Right now this file is empty, but we will add to it very soon; for now we just add it to the rest of the distribution. In the Path to publish add: .pypirc. Leave the Artifact name as dist.

As a last step, go to the .gitignore file and add the following:

    .vs/
    *.user

For some reason the .gitignore generated by Azure DevOps does not include these lines, and without them some unnecessary files end up being tracked.

At this point, make the first check-in of the code, and watch the build start automatically and go through all the steps. Troubleshoot any steps that fail. If it succeeds, you can check the Tests tab to see the published results of all your tests, and click the Artifacts button for the build to see what the build has produced and published. You will see the .pypirc file and another file with the extension .whl. This is our package.

The build definition looks like this:

[screenshot]

Now let's skip a step and go create the feed we will publish our packages from. It is a straightforward process. Go to the Artifacts section of the project; click New feed; choose a worthy name and click Create. Here is how the screen should look:

[screenshot]

On the following screen click the Connect to feed button and choose Python. In the second text section, Upload packages with twine, click Generate Python credentials. Copy the text that is generated and add it to the .pypirc file in your local repo. Don't check in the latest changes just yet.

Let's go to the Pipelines -> Releases section and create a new Release definition. When the Templates list appears, choose Start with an empty job. There is no template for what we will do, but it is a rather simple task!

Follow these steps:

  • Add an artifact: Click that and select the successful build we just created.
  • Continuous deployment trigger: Click and Enable the continuous deployment.
  • Stage name: Add a meaningful name to this stage.

In the Variables tab, go ahead and add a variable python.version and set it to 3.6.5.

And let’s go to the Task tab and add some tasks:

  • Agent job: Select Hosted Ubuntu 1604
  • Python Version: Add a Use Python Version task. In Version spec, add $(python.version)
  • Command line: Add a command line task. In Display name add: install dependencies. In the Scripts window add: python -m pip install --upgrade pip && pip install twine.
  • Command line: Add a command line task. In Display name add: twine upload package. In the Scripts window add: twine upload --config-file="$(Build.DefinitionName)/dist/.pypirc" -r python-packages "$(Build.DefinitionName)/dist/*.whl" --skip-existing.

The release definition looks like this:

[screenshot]

This is the end of the process. What is left is to bump the version one increment higher (you will need to do this every time you want to publish a new package). The .pypirc file contains the address of the feed the package will be sent to.

Check in the updated version and .pypirc, and sync the repo with Azure DevOps. Watch the build complete and the release kick in and publish our package to the feed. Troubleshoot any issues and misconfigurations.

You now have a Python package you can distribute to anyone within your organization.

Final thoughts

What I wanted to accomplish here was simply to create a redistributable Python package, and to show some of the practices for working with Python in a Windows environment.

But I could not stop there.

I went ahead and built a complete CI/CD pipeline for the package, showing how easy it is and how well Python projects integrate with Azure DevOps.

Last, with Artifacts in Azure DevOps, you have a publishing hub for all such packages, be they NuGet, Python, Maven, npm, Gradle, or universal packages.

I hope you find this helpful.

Enjoy!

Source

Developers Eat Everything in a DevOps World

No doubt you have heard that Software is Eating the world and possibly even that Developers are the new Kingmakers.
Well, if all this is true, then there is a lot of responsibility that now rests with developers (for people practicing DevOps, this is also self-evident). Even before the rise of DevOps, developers were effectively entrusted with a lot of responsibility for the success of the business. Millions of dollars, or in some cases even lives, could be at stake.
Surely you have heard that with great power comes great responsibility? The flip side of this “new kingmaker” capability is that now (a lot) more is expected of developers.
I have heard many people lament the amount of setup required (and knowledge needed) to build a service these days. Kubernetes leads the charge here, but it isn't limited to that. I even spent (mis-spent?) a Friday afternoon trying to get something akin to a “Hello World” happening on AWS Lambda (well, a little more, and I was limiting myself to tools that Amazon publishes vs. third-party tools, but still…). You can read more about that here. It was more work than I had hoped, but I was very happy with the end result (warning: contains a video with some swearing in it).
This made me reflect on why things “seem harder” for web app development, and why I needed to keep more things in my head. This is just development now (especially for the cloud). There are simply more things that, as a developer, you may come across.
This does not mean that there is more busy work to do – it is just that in the past this stuff was hidden when apps were “thrown over the wall” to Ops to deal with. This work always had to be done. Now things have “shifted left” and developers have to know about it, or at least be deeply aware of it.

I went through various things I have had to think about over the years that (in my experience) many developers do not tend to think about (but may need to) and cooked up this list:

  • DNS
  • TLS/SSL termination (and perhaps be nerdy enough to cringe when people say “SSL”)
  • Network topography
  • Database connection management (per environment)
  • API gateways, network edges
  • CDNs
  • Latency
  • Rollback implications
  • Traffic shaping (when rolling out new versions of the apps)
  • On call schedules
  • Managing disk usage and IOPS
  • Data schema upgrades and migrating data

I am sure you could think of many things that should be on there that developers now need to be aware of. I say “aware of” as no one expects someone to be an expert in all these areas. I once misallocated a bunch of time to attempt to build my own DNS server (I learned that DNS uses the word “bailiwick” which I am sure was retired from the English language centuries ago).

Different platforms put different expectations on what a developer needs to be aware of. If I plotted where responsibility for things like the items listed above lies:

  • Traditional deployment puts a lot of responsibility on Ops specialists; this is quite well known (and lots has been written about these roles in the past, and their ongoing role in DevOps practices).
  • Serverless actually takes a lot of traditional ops concerns away, but does ask more of the implementer (a developer) to consider things around quotas, access rules, API gateways and more.
  • Kubernetes possibly asks the most. Whilst there are a growing set of very well supported cloud providers that run it “natively” (Google, Microsoft, Amazon) and do most of the heavy lifting for you, developers should not be surprised when new (or perhaps long forgotten) concepts come up as part of their usage of the platform. For now it seems like Kubernetes could be the right level of abstraction that allows a developer to make sensible decisions around things like DNS, without actually needing to configure them at a low level.

For example: a common enough problem is how to include a connection string to a database. Many developers will find “novel” solutions, and many dev frameworks will help, but the core of the issue is how to specify that you want to access “tcp://your-database” and have it use the right data, in the right environment. A platform like Kubernetes includes ways to do this with DNS (after all that work setting things up), so one of the payoffs is that you don't have to think about this and you can't really mess it up. You get to benefit from that forever from that point on. Each deployment doesn't have to reinvent how to do it. When you move projects, you don't have to reinvent it (even if you move companies, perhaps – if something like Kubernetes is becoming an operating system).
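
As a small, hypothetical illustration (the environment variable names are made up, not any framework's convention): inside a Kubernetes cluster the database is just a Service name resolvable through cluster DNS, so the application code stays the same across environments and only the injected configuration changes:

    import os

    # In Kubernetes, a Service named "your-database" is resolvable via cluster
    # DNS from any pod in the same namespace, so the logical host name can stay
    # constant across dev, staging and production.
    host = os.getenv("DATABASE_HOST", "your-database")   # hypothetical variable
    port = os.getenv("DATABASE_PORT", "5432")
    name = os.getenv("DATABASE_NAME", "app")

    connection_string = f"tcp://{host}:{port}/{name}"
    print(connection_string)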

As I mentioned before, none of this is really new; it is just now far more visible to developers. This isn't busy work for the sake of busy work and stress; there are benefits: faster delivery, and making more use of the platform (which means less re-inventing of things by developers, i.e. less bespoke code). You could think of something like Kubernetes as having a one-time cost as we establish more standard, easier-to-use building blocks of the cloud (vs. every deployment pipeline being bespoke).

I like to think of these new platforms as having an “OS” to build on, and install things into. For Kubernetes, that allows really powerful tools to be installed like Istio, Kubeflow, Kubeless, Falco, Knative and more almost like you install it on a “normal” operating system (with package managers, upgrading and all that good stuff you don’t have to worry about). It would be near impossible to achieve this kind of productivity with traditional deploy targets in a “flip it over the wall” world – but with a cloud OS we can!

Also a final shout out to Jenkins X for making me think about all this. It is doing some of the heavy lifting for app developers so there isn’t too much burden of knowledge (let it create the pipelines and helm charts and deployments for you).

Source

Single node minikube cluster

Single node k8s cluster with Minikube

Minikube offers one of the easiest zero-to-dev experiences for setting up a single node Kubernetes cluster. It's also the ideal way to create a local dev environment to test Kubernetes code on.

This document explains how to set up and work with a single node Kubernetes cluster using minikube.

Install Minikube

Instructions to install minikube may vary based on the operating system and the choice of hypervisor. This is the official document which explains how to install minikube.

Start an all-in-one single node cluster with minikube

    minikube status

[output]

    minikube:
    cluster:
    kubectl:

    minikube start

[output]

    Starting local Kubernetes v1.8.0 cluster...
    Starting VM...
    Getting VM IP address...
    Moving files into cluster...
    Setting up certs...
    Connecting to cluster...
    Setting up kubeconfig...
    Starting cluster components...
    Kubectl is now configured to use the cluster.
    Loading cached images from config file.

    minikube status

[output]

    minikube: Running
    cluster: Running
    kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

Launch the Kubernetes dashboard

    minikube dashboard

Setting up docker environment

    minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    export DOCKER_CERT_PATH="/Users/gouravshah/.minikube/certs"
    export DOCKER_API_VERSION="1.23"
    # Run this command to configure your shell:
    # eval $(minikube docker-env)

Run the command given above, e.g.

    eval $(minikube docker-env)

Now your Docker client should be able to connect to the Docker daemon running inside the minikube VM

    docker ps

Additional Commands

    minikube ip
    minikube get-k8s-versions
    minikube logs
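
If you would rather verify the cluster from code than from the CLI, a minimal sketch using the official Kubernetes Python client could look like this (an extra assumption on top of this guide: the client is installed with pip install kubernetes):

    from kubernetes import client, config

    # minikube writes its context into ~/.kube/config, which this call reads.
    config.load_kube_config()

    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)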

Source

How to set up a Continuous Integration and Delivery (CI/CD) Pipeline

In this project, you will learn how to set up a continuous integration and continuous delivery (CI/CD) pipeline on AWS. A pipeline helps you automate steps in your software delivery process, such as initiating automatic builds and then deploying to Amazon EC2 instances. You will use AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change, based on the release process models you define. Use CodePipeline to orchestrate each step in your release process. As part of your setup, you will plug other AWS services into CodePipeline to complete your software delivery pipeline. This guide will show you how to create a very simple pipeline that pulls code from a source repository and automatically deploys it to an Amazon EC2 instance.

Create a release pipeline that automates your software delivery process using AWS CodePipeline.

Connect a source repository, such as AWS CodeCommit, Amazon S3, or GitHub, to your pipeline.

Automate code deployments by connecting your pipeline to AWS CodeDeploy, a service that deploys code changes committed to your source repository to Amazon EC2 instances.

(Optional) Plug in a build service such as Jenkins when you complete the Four-Stage Pipeline Tutorial.
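
As a rough sketch of what inspecting that orchestration looks like from code (assuming boto3 is installed, AWS credentials are configured, and a pipeline with the placeholder name MyFirstPipeline already exists), you could print the status of each stage like this:

    import boto3

    codepipeline = boto3.client("codepipeline")

    # Print each stage of the pipeline and the status of its latest execution.
    state = codepipeline.get_pipeline_state(name="MyFirstPipeline")
    for stage in state["stageStates"]:
        latest = stage.get("latestExecution", {})
        print(stage["stageName"], latest.get("status", "not yet executed"))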

What you’ll need before starting:

An AWS Account: You will need an AWS account to begin setting up your continuous integration and continuous delivery pipeline. Sign up for AWS.

AWS Experience: Intermediate familiarity with AWS and its services is recommended.

AWS permissions: Before you build your CI/CD pipeline with CodePipeline, you may need to set up AWS IAM permissions to start building. Click here for step-by-step instructions.

Monthly Billing Estimate:

The total cost of running a CI/CD pipeline on AWS depends on the AWS services used in your pipeline. For example, AWS CodePipeline, AWS CodeCommit, Amazon S3, and Amazon EC2 are all AWS services that you can use to build your pipeline – and each product has a different pricing model that impacts your monthly bill. Monthly charges will vary on your configuration and usage of each product, but if you follow the step-by-step instructions in this guide and accept the default configurations, you can expect to be billed around $15 per month. Most of this cost is from leaving the EC2 instance running. To see a detailed breakdown, see Services Used and Costs.

Set up a continuous integration and continuous delivery (CI/CD) pipeline on AWS with the help of industry-leading tools and experts.

Learn more about continuous delivery and how it can improve your software development process.

Need more resources to get started with AWS? Visit the Getting Started Resource Center to find tutorials, projects and videos to get started with AWS.

Learn more about the flexible services designed to enable companies to more rapidly and reliably build and deliver products using AWS and DevOps practices.

Source

Hybrid Cloud, IoT, Blockchain, AI/ML, Containers, and DevOps… Oh My!

When it rains, it pours. When it comes to enterprise IT technology innovation, it is common for multiple game-changing innovations to hit the street simultaneously. Yet, if ever the analogy of painting the car while it's traveling down the highway is suitable, it's now. Certainly, you can take a wait-and-see approach to adoption, but given the association of these innovations with greater business agility, you'd run the risk of falling behind your competitors.

Let's take a look at what each of these innovations means for the enterprise and its associated impact on the business.

First, let's explore the synergies of some of these innovations. Certainly, each innovation can and does have a certain value by itself; however, when grouped, they can provide powerful solutions to help drive growth and new business models.

  • Hybrid Cloud + IoT + AI/ML. IoT produces a lot of exhaust (data) that results in two primary outcomes: a) immediate analysis resulting in a directive to the IoT endpoint (the basis for many smartX initiatives) or b) collecting and analyzing the data looking for patterns. Either way, the public cloud is going to offer the most economical solution for IoT services, data storage, and the compute and services supporting machine learning algorithms.
  • IoT + Blockchain. Blockchains provide immutable entries stored in a distributed ledger. When combined with machine-driven entries, for example from an IoT sensor, we have non-refutable evidence. This is great for tracing chain of custody, not just for law enforcement, but also for perishables such as meat and plants (see the hash-chain sketch after this list).
  • Containers, DevOps and agile software development. These form the basis for delivering solutions like those above quickly and economically, allowing the value to be realized rapidly by the business.
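
To make the "immutable entries" idea concrete, here is a toy hash chain in Python. It is only an illustration, not a real distributed ledger (there is no distribution or consensus), but it shows why a tampered IoT entry is immediately detectable:

    import hashlib
    import json
    import time

    def _digest(entry):
        # Hash the fields that must not change after the fact.
        material = {k: entry[k] for k in ("payload", "prev_hash", "timestamp")}
        return hashlib.sha256(json.dumps(material, sort_keys=True).encode()).hexdigest()

    def add_entry(chain, payload):
        """Append a sensor reading; each entry commits to the previous entry's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        entry = {"payload": payload, "prev_hash": prev_hash, "timestamp": time.time()}
        entry["hash"] = _digest(entry)
        chain.append(entry)

    def is_valid(chain):
        """Recompute every hash; any edited payload breaks the chain."""
        for i, entry in enumerate(chain):
            expected_prev = chain[i - 1]["hash"] if i else "0" * 64
            if entry["prev_hash"] != expected_prev or entry["hash"] != _digest(entry):
                return False
        return True

    ledger = []
    add_entry(ledger, {"sensor": "truck-7-temp", "celsius": 3.9})
    add_entry(ledger, {"sensor": "truck-7-temp", "celsius": 4.1})
    print(is_valid(ledger))                    # True
    ledger[0]["payload"]["celsius"] = 2.0      # tamper with the chain of custody
    print(is_valid(ledger))                    # False: the tampering is detected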

There are businesses that are already using these technologies to deliver new and innovative solutions, many of which have been promoted in the press and at conferences. While these stories illustrate strong forward momentum, they also tend to foster a belief that these innovations have reached a sufficient level of maturity, such that the solution is not susceptible to lack of availability. This is far from the case. Indeed, these innovations are far from mainstream.

Let’s explore what adoption means to IT and the business for these various innovations.

Hybrid Cloud

I specifically chose hybrid cloud versus public cloud because it represents an even greater amount of complexity to enterprise IT than public cloud alone. It requires collaboration and integration between organizations and departments that have a common goal but very different approaches to achieving success.

First, cloud is about managing and delivering software services, whereas the data center is charged with delivering both infrastructure and software services. However, the complexity and overhead of managing and delivering reliable and available infrastructure overshadows the complexity of software services, resulting in the latter often receiving far less attention in most self-managed environments. When the complexity surrounding delivery of infrastructure is removed, the operations team can focus solely on delivery and consumption of software services.

Security is always an issue, but the maturation process surrounding delivery of cloud services by the top cloud service providers means that it is a constantly changing environment. With security in the cloud, there is no room for error or the applications could be compromised. This, in turn, requires that after each update to the security controls around a service the cloud team (architects, developers, operations, etc.) must educate themselves on the implications of the change and then assess how that change may affect their production environments. Any misunderstanding of these updates and the environment could become vulnerable.

Hybrid cloud also often means that the team must retain traditional data center skills while adding skills related to the cloud service provider(s) of choice. This is an often overlooked aspect of assessing cloud costs. Moreover, highly-skilled cloud personnel are still difficult to attract and usually demand higher-than-market salaries. You could (and should) upskill your own staff, but you will want a few experts on the team to provide on-the-job training for public cloud, as an unsecured public cloud may lead to compromising situations for businesses.

Internet-of-Things (IoT)

The issue with IoT is that it is not one single thing, but a complex network of physical and mechanical components. In a world that has been moving to a high degree of virtualization, IoT represents a marked shift back toward data center skills with an emphasis on device configurations, disconnected states, limitations on size of data packets being exchanged, and low-memory code footprints. Anyone who was around during the early days of networking DOS PC’s will be able to relate to some of the constraints.

As with all things digital, security is a highly complex topic with regard to IoT. There are so many layers within an IoT solution that invite compromise: the sensor, the network, the edge, the data endpoint, etc. As many of the devices participating in an IoT network may be resource-constrained, there's only so much overhead that can be introduced for security before it impairs the purpose.

For many, however, when you say IoT they immediately see only the analytical aspects associated with all the data collected from the myriad of devices. Sure, analyzing the data obtained from the sensor mesh and the edge devices can yield an understanding of the way things work that was extremely difficult to reach with the coarse-grained telemetry previously available. For example, a manufacturing device that used to signal issues with a low hum may now have sensors revealing that, in tandem with the hum, there is also a rise in temperature and an increase in vibration. After a few short months of collecting data, there is no need to even wait for the hum; the data will indicate the beginning of a problem.

Of course, the value discussed in the prior paragraph can only be realized if you have the right skilled individuals across the entire information chain: those able to modify or configure endpoint devices to participate in an IoT scenario, the cybersecurity and infosec experts to limit potential issues due to breach or misuse, and the data scientists capable of making sense of the volumes of data being collected. And if you haven't selected the public cloud as the endpoint for your data, you also have the additional overhead of managing network connectivity and the storage capacity associated with rapidly growing volumes of data.

Artificial Intelligence and Machine Learning (AI/ML)

If you can harness the power of machine learning and AI you gain insights into your business and industry in a way that was very difficult up until recently. While this is seemingly a simple statement, that one word “harness” is loaded with complexity. First, these technologies are most successful when operating against massive quantities of data.

The more data you have, the more accurate the outcomes. This means that it is incumbent upon the business to a) find, aggregate, cleanse and store the data to support the effort, b) formulate a hypothesis, c) evaluate the output of multiple algorithms to determine which will best support the outcome you are seeking (e.g. predictive, trends, etc.) and d) create a model. This all equates to a lot of legwork to get the job done. Once your model is complete and your hypothesis proven, the machine will do most of the work from there on out, but getting there requires a lot of human knowledge-engineering effort.
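
A minimal sketch of step (c), assuming scikit-learn is available and the data has already been found, aggregated and cleansed into a feature matrix X and labels y (a bundled demo dataset stands in for real business data here):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in for data that has already been found, aggregated and cleansed.
    X, y = load_breast_cancer(return_X_y=True)

    # Step (c): evaluate several candidate algorithms against the hypothesis
    # before committing to a single model.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=5000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for label, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{label}: mean accuracy {scores.mean():.3f}")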

A point of caution: only make business decisions using the outcome of your AI/ML models when you have followed every one of these steps and then qualified the outcome of the model against the real world at least two times.

Blockchain

Touted as the technology that will “change the world,” yet outside of cryptocurrencies, blockchain is still trying to establish firm roots within the business world. There are many issues with blockchain adoption at the moment, the most prevalent one is velocity of change. There is no single standard blockchain technology.

There are multiple technologies, each attempting to provide the foundation for trusted and validated transactional exchange without requiring a centralized party. Buying into a particular technology at this point in the maturity curve will provide insight into the value of blockchain, but will require constant care and feeding, as well as the potential need to migrate to a completely different network foundation at some point in the future. Hence, don't bet the farm on the approach you choose today.

Additionally, there are still many outstanding non-technical issues that blockchain value is dependent upon, such as the legality of blockchain entries as a form of non-repudiation. That is, can a blockchain be used as evidence in a legal case to demonstrate intent and validation of agreed upon actions? There are also issues related to what effect use of a blockchain may have on various partnering contracts and credit agreements, especially for global companies with GDPR requirements.

Finally, is there a large enough network behind the blockchain to enforce consensus? Who should host these nodes? Are the public networks sufficient for business, or is there a need for a private network shared among a community with common needs?

Containers, DevOps, & Agile SDLC

I've lumped these three innovations together because, unlike the others, they are more technological in nature and carry elements of the “how” more so than the “what”. Still, there is a significant amount of attention being paid to these three topics that extends far outside the IT organization, due to their association with enabling businesses to become more agile. To wit, I add my general disclaimer and word of caution: the technology is only an enabler; it's what you do with it that might be valuable or may have the opposite effect.

Containers should be the least impactful of these three topics, as it’s simply another way to use compute resources. Containers are smaller and more lightweight than virtual machines but still facilitate a level of isolation between what is running in the container and what is running outside the container. The complexity arises from moving processes from bare metal and virtual machines into containers as containers leverage machine resources differently than the aforementioned platforms.

While it's fairly simple to create a container, getting a group of containers to work together reliably can be fraught with challenges. This is why container management systems have become more and more complex over time. With the addition of Kubernetes, businesses effectively need the knowledge of data center operations in a single team. Of course, public cloud service providers now offer managed container management systems that reduce the need for such a broad set of knowledge, but it's still incumbent on operations to know how to configure and organize containers from a performance and security perspective.

DevOps and the Agile Software Development Lifecycle (SDLC) really force internal engineering teams to think and act differently if they are transitioning from traditional waterfall development practices. Many businesses have taken the first step of this transition by starting to adopt some Agile SDLC practices. However, because of the need for retraining, hiring, and support of this effort, the interim state many of these businesses are in has been called “wagile”, meaning some combination of waterfall and agile.

As for DevOps, the metrics have been published regarding the business value of becoming a high-performing software delivery and operations organization. In this age of “software is eating the world”, can your organization afford to ignore DevOps, or, if not ignore it, take years to transition? You will hear stories from businesses that have adopted DevOps and Agile SDLC and made great strides in reducing latency, increasing the number of releases they can make in a given time period, and deploying new capabilities and functions to production at a much faster rate with fewer change failures. Many of these stories are real, but even in these businesses you will still find pockets where there is no adoption and teams still follow a waterfall SDLC that takes ten months to get a single release into production.

Conclusion

Individually, each of these innovations requires trained resources and funding, and can be difficult to move beyond proof-of-concept to completely operationalized production outcomes. Taken in combination, on top of existing operational pressures, these innovations can rapidly overwhelm even the most adept enterprise IT organization. Even in cases where there is multi-modal IT and these innovations are occurring outside the path of traditional IT, existing IT knowledge and experience will be required for support. For example, if you want to analyze purchasing trends for the past five years, you will need the support of the teams responsible for your financial systems.

All this leads to the really big question: how should businesses go about absorbing these innovations? The pragmatic answer is, of course, to introduce those innovations related to a specific business outcome. However, as stated, waiting to introduce some of these innovations could result in losing ground to the competition. This means that you may want to introduce some proof-of-concept projects, especially around AI/ML and Agile SDLC, with IoT and blockchain projects where they make sense for your business.

Source

Top Stories from the Microsoft DevOps Community – 2019.01.11 – Microsoft DevOps Blog

Welcome back Microsoft developers and DevOps practitioners; I hope you had a great new year! Me? I took some time off to recharge the batteries and I’m glad I did because — wow — even though it’s just the beginning of 2019, there’s already some incredible news coming out of the DevOps community.

Alexa, open Azure DevOps
This. Is. Incredible. Mike Kaufmann demonstrates the MVP of integration between Alexa and Azure DevOps. Do you want to assign a work item to him? Just ask. You’ve got to watch this video – once I did, I realized that I wanted an Alexa.

TFS 2019, Change Work Item Type and Move Between Team Project
We recently brought the ability to change work item types and to move work items between projects to the on-premises version of Azure DevOps Server. But there’s a caveat – you can’t have Reporting Services enabled. Ricci Gian Maria walks through this limitation and the solution.

Deploying to Kubernetes with Azure DevOps: A first pass
Kubernetes is incredibly popular, as it's the next-generation deployment platform for containerized applications. But how do you build out a deployment pipeline around it? Jason Farrell creates his first pipeline to build a container and deploy it into AKS (Azure Kubernetes Service).

Creating a git repo with Azure Repos and trying out Git LFS
If you’re thinking about using Git in a project with large binary assets – like images, videos or audio files – you might find yourself disappointed, as Git struggles with large binaries. Andrew Lock explains why, and how you can use Git LFS (Large File Storage) to manage your project.

How the Azure DevOps teams plan with Aaron Bjork
Donovan Brown interviews Aaron Bjork about the way the Azure DevOps team has historically planned our agile processes, and how we've adapted and changed our high-level planning and adopted Objectives and Key Results (OKRs).

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

Source