IBM and Red Hat: Now, That’s an Interesting Combination

Like everyone else, Sunday night I got a surprise with the pre-announcement of IBM’s intent to acquire Red Hat (I might have seen it a little earlier than the rest of you, as a friend from IBM shared it as soon as it was public). I’ve been mulling this over and checking out the information being provided thus far, and have some early thoughts:

  • No matter what, some people aren’t going to like this.
  • Very much like Microsoft purchasing GitHub, you’ll see negative knee-jerk reactions.
  • Early indicators are that it is good for everyone.

Of course, since the acquisition isn’t expected to close for about a year, all we have is early indications, but they do seem positive.

Fear for Open Source

IBM goes a long way to assuaging the instant fear that many will have. From the press release to the investor relations call, the company makes it very clear that Red Hat will operate on its own. IBM stops short of saying, “wholly owned subsidiary,” but is keeping all that makes Red Hat, well, Red Hat. That will likely change over time—it does for any acquisition—but for the near future, Red Hat’s strategies and values will continue. So those people who will inevitably equate this to the death of open source can be safely shushed or ignored.

Customer Concerns

Red Hat knows that there is an open source equivalent to everything it does, and as such it treats those willing to fork over money, whether for support or for more polished products, as if they could walk away tomorrow. There is nothing in any of the released documentation to suggest this will change. In fact, "keeping Red Hat's strategy" was specifically mentioned to investors, and this worldview informs everything the company does. As one Red Hat person I worked with years ago put it: "We need to give them a reason for paying us when they don't have to. That's where support and service come in." In the short term, that does not appear to be changing.

Cloud Boom!

IBM is right: the future of cloud is in two narratives, one of cloud portability and another of on-premises for stuff that just isn't that portable. IBM plus Red Hat is in a unique position to serve those two markets and offer up DevOps toolsets, based on standards, that will spin up whatever, wherever. Between them, they work with every major public cloud vendor, support containers and have fingers deep in the private cloud space. If their prediction that workloads will continue to move toward on-demand is accurate, it will be difficult to find a better solution set than the two can offer together.

So Overall, it Looks Good

I was in discussions on several social media platforms over the course of Sunday evening about this, and the consensus seems to be that it could be far worse. If Red Hat were for sale, think through all the companies that would see benefit in owning Red Hat and could afford the price tag IBM is paying. There are not a lot that could be counted on to "let Red Hat be Red Hat." Over the long term, IBM can't either, but that is simply business. In the short term, it is reasonable to expect that not much will change, except that Red Hat customers get pitched more IBM software and gear while IBM offers Red Hat standard with its other offerings. Other potential purchasers would likely fold Red Hat completely into their business, destroying what makes the company unique. So if Red Hat was going to be sold, the face the two companies are putting on it is the best we could hope for.

Ops Side Love

Red Hat has done a good job of integrating containers as they came along, and Ansible provides decent configuration support for a variety of deployment targets. Merging that with IBM’s cloud work should offer a streamlined DevOps pipeline that can handle whatever target you want it to configure. Oh, certainly there will be speed bumps along the way—we do live in a time of change, which brings them along—but portability plus programmed configurability will (as IBM has hinted) equate to prequalified stacks that can then be dropped into deployment toolchains.

Not too Long to Wait

While the official merger will not be consummated until late next year, expect to see the two organizations use their longstanding partnership to begin exploring cross-company cooperation before that. We should see the first fruits of this arrangement relatively quickly, and unless Red Hat's investors find some reason to vote down a huge premium on their per-share stock value, expect the cooperation to continue ramping up until the acquisition is completed.

If Red Hat Stays the Same…

Given that IBM has said Red Hat will stay the same for the foreseeable future, everything you are doing with the company can keep going, and you can pick and choose among the partnership options that are bound to come along pre-completion. That's a benefit to the customer. So grab what you like and keep making your DevOps environment better.

Source


Unlocking the DevOps psychology: What does the digital mindset look like?

A big part of DevOps is challenging traditional, established software development approaches that have dominated the industry for decades. But convincing organisations and IT leaders that there are better alternatives out there is far from an easy task.

Regardless of industry, the world is used to an authoritative and linear way of working. Managers make all the key decisions, with little input from anyone else within the organisation. Meanwhile, employees are slammed with a never-ending stream of work to complete by harsh deadlines. Overall, there’s limited communication between teams, and silos appear.

In the technology sector, DevOps is looking to change things by breaking down the wall between development and operations teams – while connecting all the key stakeholders involved in the application lifecycle. Ultimately, the aim is to transform and accelerate the way companies create, roll out, and maintain software. Benefits include shorter development cycles, fast feedback loops, more frequent deployments, and improved visibility and collaboration.

Although frameworks, best practices, and automated tools play an important role in implementing and scaling a DevOps transition, thinking differently is perhaps more critical. Whether it’s at the leadership or employee level, psychology underpins DevOps. In this post, we’ll show how you can unlock the digital mindset and bring your organisation forward in today’s interconnected world.

The bigger picture

Raj Fowler, Principal DevOps Consultant, believes that organisations and leaders are struggling to respond to change due to societal perceptions of career and business success.

“Early on, we’re trained and conditioned by our education and establishments to follow a very minimalistic, reductionist, and siloed view of the world. Everyone is designed to be good at one particular thing,” he says.

To Fowler, this way of thinking leads to failure. By focusing on the finer details, you’re effectively missing the bigger picture. He continues: “If you’ve been trained to fix wheels on a car, that’s all you’re good at. But whether or not the whole car is in working order depends on the system.

“That idea originally came from Frederick Taylor, who advocated building local efficiency to get overall greatness. However, this brings a different psychology to bear, where it’s not just about the local efficiencies of the things they’re looking after, but the overall goal of the business.”

In The Phoenix Project, veteran business executive Erik gets IT leader Bill to visualise the entire system of a factory and to incorporate this into his own organisation. This demonstrates the importance of viewing things from a high level.

Raj continues: "Erik told Bill to look at the whole, and that local optimisation doesn't mean that the overall becomes effective. That's where Gene Kim talks about the Three Ways, the first of which builds on the arguments about total systems thinking and Total Quality Management. The end result is a mindset more focused on the whole."

Looking ahead

Like the old saying goes, two heads are better than one. Everyone involved in the digital transformation journey should work collaboratively if they’re to accelerate innovation and create value for the organisation. “From a management perspective, having two people working on one particular thing is inefficient. But, actually, the overall effectiveness and quality of the product is better because two people have collaborated and jointly conducted quality control,” explains Raj.

Leadership, Raj admits, is increasing in complexity as the world continues to advance and become more connected. “To me, DevOps is about no longer thinking in your local silos and local efficiencies, but instead, looking at the overall picture of the business. But the problem is that our world is always evolving.”

“A hundred years ago, the number of variables organisations and teams had to manage were very few and the rate of change was relatively slow. There wasn’t digital disruption, or small organisations taking on the establishment. Today, we have many more variables and things are constantly changing. Unfortunately, from experience, I’ve seen leaders try to control every single one of them – and their mindset becomes about working harder rather than smarter.”

Modern leaders

In the DevOps environment, leaders must trust the people closest to the problems if those problems are to be solved. "The high-performance mindset around digital transformation is concerned with integrations, not the details of the work. Giving smaller groups of people more accountability, having greater alignment with the goal and vision, and building a culture of trust are critical to succeeding," recommends Raj.

“Whether it’s an engineer or a CEO, everyone needs psychological safety and to feel like failure isn’t a bad thing. Teams should trust each other, even if they may sometimes feel let down. Everyone must have time to be creative and experiment. In the past, leaders would try to control everything. Now, there needs to be more flexibility and willingness to try new things.”

Raj takes the view that modern leaders should create environments where teams can be candid and learn from each other. “Gardeners do not force plants to grow; they create the right environment for growth (Team of Teams, McChrystal). In my opinion, the new leadership and DevOps psychology is about creating the right environment for people and teams to grow themselves, and be the best they can be,” he says.

“Everybody should be encouraged to speak up and be challenged to debate. In the past, this would have been seen as challenging authority. But in reality, whether you’re the CEO of the company or an engineer, active debate stimulates understanding and helps the business to keep moving in the right direction.”

When it comes to embarking on a DevOps transformation journey, there's no denying the important role that software and methodologies play. But it's clear that you can only achieve success if you're in the right frame of mind. Leadership in the digital age is all about encouraging collaboration and innovation, rather than simply being a boss. And all members of the team should be involved in the process.

Source

AI and Machine Learning from GigaSpaces InsightEdge harnessed by Magic Software’s SaaS integration platform

Fast Streaming and Advanced Analytics Provide Magic Customers Real Time Insights to Action Required for Increased Operational Efficiency, Competitiveness and Innovation

New York, NY — October 31, 2018 — GigaSpaces, the provider of InsightEdge, a leading in-memory real-time analytics platform for instant insights to action, announced today that InsightEdge has been selected by Magic Software Enterprises to power its Magic xpi end-to-end integration platform. The integration will enable companies to make faster and smarter data-driven decisions to boost revenues, reduce costs, mitigate risks, and outperform competitors.

Providing the free flow of data between leading ERP, CRM, finance, MES and other systems, Magic xpi now leverages InsightEdge, which unifies real-time analytics and AI, to achieve lean manufacturing, perform predictive maintenance and automate operational workflows. Machine learning models run with sub-second latency on hot data as it's born, enriched with historical context from data lakes, resulting in accurate, real-time actionable insights for improved decision making.

“InsightEdge enables our customers to generate insights for C-level executives and line of business management and optimizes automated processes from an unprecedented amount of data generated by sensors and GPS readings from machines, products, employees, as well as inputs from shop floor apps, and back office systems,” said Yuval Lavi, VP Product Innovation at Magic Software. “The combination of integrated data from multiple sources, extreme data processing and real-time analytics provides insights that impact leaner operations and an improved customer experience.”

Magic xpi running with InsightEdge enables several data driven processes, for example, using sensors to monitor equipment to predict breakdowns, performing predictive analytics to determine which and how many quality tests should be performed, and sharing supplier production data with partners and customers to identify delivery delays and adjust processes accordingly.
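The equipment-monitoring use case can be made concrete with a rolling z-score check that flags sensor readings drifting away from recent behavior. This is a minimal, generic sketch, not Magic xpi or InsightEdge code; the window size, threshold and readings are all illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=10, threshold=3.0):
    """Flag readings whose z-score against a rolling window exceeds the threshold."""
    history = deque(maxlen=window)

    def check(reading):
        anomalous = False
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) / sigma > threshold:
                anomalous = True
        history.append(reading)
        return anomalous

    return check

detector = make_anomaly_detector(window=10, threshold=3.0)
# Ten normal vibration readings, then a spike simulating a failing part.
readings = [50.0, 50.3, 49.7, 50.1, 49.9, 50.2, 49.8, 50.0, 50.1, 49.9, 120.0]
flags = [detector(r) for r in readings]  # only the final reading is flagged
```

A production system would feed the flagged events into the kind of automated workflow the article describes, rather than just returning a boolean.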

McKinsey has noted that AI-driven predictive maintenance models can deliver a 10% reduction in annual maintenance costs, a 20% reduction in downtime, and a 25% reduction in inspection costs.

This is not the first collaboration between Magic and GigaSpaces. Magic has used GigaSpaces' XAP in-memory computing platform for years to deliver fast data streaming, aggregation and calculations, and last year announced an integration leveraging InsightEdge as an IoT hub. Magic xpi customers will now have the option to run InsightEdge and experience the benefits of real-time analytics.

“Data integration combined with real-time advanced analytics is needed to fuel the factory of the future,” said Yoav Einav, VP of Products for GigaSpaces. “With the incorporation of InsightEdge capabilities into Magic xpi, we are bringing the power of machine learning to the shop floor and the back office to help companies optimize processes to maximize efficiencies and exceed their revenue goals.”

GigaSpaces and Magic Software are presenting “The Insight-Driven Organization: Leverage AI to Transform Your Data into Revenue” on November 6th at the Design Offices in Munich Germany, starting at 8:30AM. The event will include case studies and discussions about market challenges, trends and real-world best practices for enterprises to innovate with confidence and become insight-driven. For more information on the event click here.

About GigaSpaces
GigaSpaces provides leading in-memory computing platforms for real-time insight to action and extreme transactional processing. With GigaSpaces, enterprises can operationalize machine learning and transactional processing to gain real-time insights on their data and act upon them in the moment. The always-on platforms for mission-critical applications across cloud, on-premise or hybrid, are leveraged by hundreds of Tier-1 and Fortune-listed organizations worldwide across financial services, retail, transportation, telecom, healthcare, and more. GigaSpaces offices are located in the US, Europe and Asia.

Source

Register Now for “DevSecOps: The Open Source Way” Session | @DevOpsSUMMIT @RedHat @GHaff #DevOps #DevSecOps #Microservices

DevSecOps: The Open Source Way

Register for this session ▸ Here

DevOps purists may chafe at the DevSecOps term given that security and other important practices are supposed to already be an integral part of routine DevOps workflows. But the reality is that security often gets more lip service than thoughtful and systematic integration into open source software sourcing, development pipelines, and operations processes, in spite of an increasing number of threats.

The extensive use of modular open source software from third parties, distributed development teams, and rapid iterative releases require a commitment to security and the adoption of security approaches that are continuous, adaptive, and heavily automated.

In this session, Red Hat Technology Evangelist Gordon Haff looks at successful practices that distributed and diverse teams use to iterate rapidly while still reacting quickly to threats and minimizing business risk. He'll discuss how a container platform can serve as the foundation for DevSecOps in your organization. He'll also consider the risk management associated with integrating components from a variety of sources (a consideration open source software has had to deal with since the beginning), and show ways by which automation and repeatable, trusted delivery of code can be built directly into a DevOps pipeline.
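To give a flavor of what building trusted delivery directly into a pipeline can look like, here is a minimal sketch of a gate that checks a project's pinned dependencies against a list of known-vulnerable versions. The package names and the hard-coded advisory list are hypothetical; a real pipeline would query an actual vulnerability database or advisory feed.

```python
# Hypothetical stand-in for a real vulnerability advisory feed.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),
    ("otherpkg", "0.9.1"),
}

def parse_requirements(text):
    """Parse simple 'name==version' pins, skipping comments and blank lines."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins.append((name.strip().lower(), version.strip()))
    return pins

def gate(requirements_text):
    """Return the vulnerable pins; an empty list means the build may proceed."""
    return [pin for pin in parse_requirements(requirements_text)
            if pin in KNOWN_VULNERABLE]

blocked = gate("examplelib==1.2.0\nsafe-pkg==2.0.0\n")  # -> [("examplelib", "1.2.0")]
```

Run as a pipeline step, a non-empty result would fail the build, making the security check automated and repeatable rather than a manual review.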

Speaker Bio
Gordon Haff is senior cloud strategy marketing and evangelism manager at Red Hat. Prior to Red Hat, Gordon wrote hundreds of research notes and was frequently quoted in publications like The New York Times on a wide range of IT topics, as well as advising clients on product and marketing strategies. He also has many years of hands-on experience with both IT software and hardware.

CloudEXPO | DevOpsSUMMIT | DXWorldEXPO 2018 New York will be held November 12-13, 2018 in New York City.

Digital Transformation (DX) is a major focus with the introduction of DXWorldEXPO within the program. Successful transformation requires a laser focus on being data-driven and on using all the available tools that enable transformation if organizations plan to survive over the long term.

A total of 88% of Fortune 500 companies from a generation ago are now out of business. Only 12% still survive. Similar percentages are found throughout enterprises of all sizes.

Our Top 200 Digital Transformation Sponsors

Last year, 43 sponsors, 104 exhibitors, and over 200 DX companies participated at CloudEXPO | DevOpsSUMMIT | DXWorldEXPO, more than any other similar event in the world. This year we launched our conference website and conference schedule on May 12, 2018, exactly six months before our November 12-13 New York event dates.

View our Top 200 Digital Transformation Sponsors ▸ Here

Register for the Conference Here

We are offering early bird savings on all ticket types; purchase your conference tickets today to save a significant amount of money.

Speaking Opportunities Here

This year we are presenting a MEGA faculty of 222 rockstar speakers. Submit your speaking proposal, which will be instantly shared with the conference advisory board. All accepted proposals are confirmed within six hours of submission.

Sponsorship Opportunities Here

Contact us with your sponsorship and exhibit inquiry. We will answer all your questions and work with you to identify the best option based on your goals.

Sponsor List (updated daily) Here

Exhibitor List (updated daily) ▸ Here

2018 Conference Agenda, Our MEGA Faculty of 222 Rockstar Speakers

View our conference schedule (updated daily) ▸ Here

View our speaker lineup (updated daily) ▸ Here

The DXWorldEXPO New York 2018, DevOpsSUMMIT New York 2018 and CloudEXPO New York 2018 agenda will present 222 rockstar faculty members, 200 sessions, and 22 keynotes and general sessions across 10 distinct conference tracks.

  • Cloud-Native | Serverless
  • DevOpsSummit
  • FinTechEXPO – New York Blockchain Event
  • CloudEXPO – Enterprise Cloud
  • DXWorldEXPO – Digital Transformation (DX)
  • Smart Cities | IoT | IIoT
  • AI | Machine Learning | Cognitive Computing
  • BigData | Analytics
  • The API Enterprise | Mobility | Security
  • Hot Topics

CloudEXPO | DevOpsSUMMIT | DXWorldEXPO 2018 New York covers all of these topics with the industry's most comprehensive program: 222 rockstar speakers presenting 22 Keynotes and General Sessions, 200 Breakout Sessions along 10 Tracks, as well as our signature Power Panels. Our Expo Floor brings together the world's leading companies in Cloud Computing, DevOps, FinTech, Digital Transformation, and all they entail.

As your enterprise creates a vision and strategy that enables you to create your own unique, long-term success, learning about all the technologies involved is essential. Companies today not only form multi-cloud and hybrid cloud architectures, but create them with built-in cognitive capabilities.

Cloud-Native thinking is now the norm in financial services, manufacturing, telco, healthcare, transportation, energy, media, entertainment, retail and other consumer industries, as well as the public sector.

CloudEXPO is the world’s most influential technology event, where the term Cloud Computing was coined over a decade ago and where technology buyers and vendors meet to experience and discuss the big picture of Digital Transformation and all of the strategies, tactics, and tools they need to realize their goals.

FinTech Is Now Part of the DXWorldEXPO | CloudEXPO Program

Financial enterprises in New York City, London, Singapore, and other world financial capitals are embracing a new generation of smart, automated FinTech that eliminates many cumbersome, slow, and expensive intermediate processes from their businesses.

Accordingly, attendees at the upcoming 22nd CloudEXPO | DXWorldEXPO, November 12-13, 2018 in New York City, will find fresh new content in two new tracks:

  • FinTechEXPO
  • New York Blockchain Event

These tracks will incorporate FinTech and blockchain, as well as machine learning, artificial intelligence and deep learning.

FinTech brings efficiency as well as the ability to deliver new services and a much improved customer experience throughout the global financial services industry. FinTech is a natural fit with cloud computing, as new services are quickly developed, deployed, and scaled on public, private, and hybrid clouds.

More than US$20 billion in venture capital is being invested in FinTech this year. We’re pleased to bring you the latest FinTech developments as an integral part of our program.

Sponsorship Opportunities

CloudEXPO | DevOpsSUMMIT | DXWorldEXPO are together the single event where technology buyers and vendors meet to experience and discuss cloud computing and all that it entails. For more than a decade, sponsors and exhibitors of CloudEXPO | DevOpsSUMMIT | DXWorldEXPO have benefited from unmatched branding, profile building and lead generation opportunities through:

  • Featured on-site presentation and ongoing on-demand webcast exposure to a captive audience of industry decision-makers
  • Showcase exhibition during our new extended dedicated expo hours
  • Priority breakout session scheduling for sponsors, who are guaranteed a 40-minute technical session
  • Online advertising on 4.5 million article pages in SYS-CON’s leading i-Technology publications
  • Capitalize on our Comprehensive Marketing efforts leading up to the show with print mailings, e-newsletters and extensive online media coverage
  • Unprecedented PR Coverage: Unmatched editorial coverage on Cloud Computing Journal
  • Tweetups to over 127,000 Twitter followers
  • Press releases sent on major wire services to over 500 industry analysts

Secrets of Our Most Popular Sponsors and Exhibitors ▸ Here

Secrets of Our Most Popular Speakers Here

For more information on sponsorship, exhibit, and keynote opportunities call us at 954 242-0444 or contact us ▸ Here

Speaking Opportunities Open

The upcoming 22nd International CloudEXPO | DevOpsSUMMIT | DXWorldEXPO, November 12-13, 2018 in New York City, NY announces that its Call For Papers for speaking opportunities is now open. We reply to all accepted speaking proposals within six hours.

About DXWorldEXPO LLC

DXWorldEXPO LLC is a Lighthouse Point, Florida-based trade show company and the creator of DXWorldEXPO – Digital Transformation Conference & Expo. The company produces and presents the world’s most influential technology events including CloudEXPO, DevOpsSUMMIT, FinTechEXPO – Blockchain Event.

Source

Let’s Bury Bimodal Thinking in Enterprise IT

Bimodal Thinking in Enterprise IT

We still hear a lot about mode-1 and mode-2 in enterprise IT. Like many other large enterprises, you might have adopted the terminology introduced by Gartner and use it every day to shape the way you deliver and operate technology. However, I have always been uncomfortable with the idea of bimodal: it ignores the architectural reality of the tightly coupled systems we have today, and there is also a cultural dimension that comes up a lot in the transformations I help clients with.

The cultural problem with Gartner’s bimodal IT view is that it relegates legacy systems such as the mainframe (Z Series) or i Series to mode 1 and discourages the new generation of professionals from working on these platforms. If you are in mode 2, you get access to the latest tools and training opportunities; if you are not, you are relegated to the end of the queue!

What do you think? Is bimodal creating two classes of people? Or am I being too harsh in my judgment?

In a recent article, BMW CIO Klaus Straub went on record saying bimodal IT doesn’t work due to cultural issues: “Until September of last year, I also had a bimodal world in my head—but then the concept was incomprehensible to me. However, it is not suitable for permanently structuring an IT department.”

BMW is not alone. Other enterprises are adding their voices, such as Peter Jacobs, the CIO of ING Bank Netherlands, who has expressed similar views on bimodal. To quote Jacobs: “I would rather work agile at my core bank system than at the channels.”

Source

Simplifying Through DevOps in Banking

There are a great many assets available on the internet that talk about what DevOps is and how far DevOps adoption has come, so we will not repeat the “what” and the “how far.” Instead, we will focus on the road ahead: the future of simplification and transformation at banking organizations and the role DevOps is likely to play in it, across all three areas of “Run the Bank” (RTB), “Change the Bank” (CTB) and “Transform the Bank” (TTB).

Figure 1: Change-Run-Transform Bank – POA with DevOps Maturity

In this article, we take a look at what shape we think DevOps-driven simplification in financial organizations can take in the near future, along with some opportunities for such simplification. As depicted in Figure 1, with DevOps maturity the siloed processes of CTB, RTB and TTB will come closer together, allowing common activities and processes to be carried out in one place. The rest will be simplified and automated, drastically reducing the elapsed cycle time from ideation to production and in application management. This will yield benefits not only in cost and time, but also in feature availability to the business.

Banking Focus

In banking, the early adopters are leading the march with good practices such as standardized tooling for specific technology areas, orchestration, feedback loop setup, and process and technology rationalization. That focus needs to continue.

We think the challenge DevOps architects now need to focus on is using DevOps to help financial organizations bring down non-discretionary spend by simplifying the previously ignored maintenance, operations and infrastructure realms. The core of this DevOps-driven simplification needs to be driving the cost of maintenance as low as possible, thereby releasing funds to discretionary areas in “Change the Bank” and “Transform the Bank.”

While the point of arrival for DevOps-led transformation should be “full automation” and “zero maintenance,” both are a bit far in the future, because the return on investment (ROI) must justify the money and effort required to make them happen. We see the focus shifting to the value delivered to the business by transforming an application or suite of applications. Figure 2 depicts the core repeatable activities of a DevOps-led simplification exercise; keep in mind that these improvement cycles need feedback loops set up at each stage.

Figure 2: Repeatable activities for DevOps-led simplification

This brings us to our next idea: the road map to DevOps-enabled simplification needs to be paved with metrics and measures, and these metrics cannot be randomly chosen. They have to be actually measurable, divided into the two areas of business metrics and IT metrics as we explain below, and diligently tracked to aid course correction.

For example, release orchestration processes for applications are mostly complicated: they need a lot of manual effort, have strict compliance requirements and security mandates, must be carried out more frequently nowadays because of shorter deployment cycles, and have to manage the ever-increasing complexity of component interaction (upstream and downstream). Release orchestration is also a very important step in the entire CI/CD pipeline, so standardizing and simplifying it makes immense sense. But how does one measure success and quantify improvement?

As already stated, intelligent insights (metrics and measures) are important in this process for both business and technical team members, so that everyone has a single source of truth for current status and distance from the end goal. This kind of automation also supports the continuous improvement pillar of DevOps adoption by logging everything that matters and keeping frame-by-frame data for everything that helps. This collection of metrics aids analysis of success as well as failure, furthering the cause of continuous improvement.
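To make "measurable" concrete: two of the most common delivery metrics, lead time to change and change failure rate, fall straight out of a timestamped release log. A minimal sketch, with hypothetical release records:

```python
from datetime import datetime

# Hypothetical release log: (commit time, deploy time, deploy succeeded)
releases = [
    (datetime(2018, 10, 1, 9), datetime(2018, 10, 2, 9), True),
    (datetime(2018, 10, 3, 9), datetime(2018, 10, 3, 17), True),
    (datetime(2018, 10, 5, 9), datetime(2018, 10, 6, 9), False),
]

def lead_time_hours(releases):
    """Mean lead time from commit to deploy, in hours."""
    deltas = [(deploy - commit).total_seconds() / 3600
              for commit, deploy, _ in releases]
    return sum(deltas) / len(deltas)

def change_failure_rate(releases):
    """Fraction of deployments that failed."""
    return sum(1 for *_, ok in releases if not ok) / len(releases)

avg_lead = lead_time_hours(releases)        # mean of 24h, 8h and 24h
failures = change_failure_rate(releases)    # 1 failed deploy out of 3
```

The point is that once release orchestration emits structured, timestamped events, these numbers become a byproduct of the pipeline rather than a manual reporting exercise.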

Continuing with the example of release orchestration, there are many important best practices that help ensure the wheel is not reinvented. Some of the important learnings that only practitioners will be able to let you in on are as follows:

  1. Tools that need scripting will not be able to help you scale and create low-maintenance solutions.
  2. You need to invest in tools/orchestration solutions that help you visualize, get intelligent insights in a jiffy and can report in a variety of forms and shapes.
  3. Tools need to be able to capture and track business metrics, too.
  4. You need to spend effort on organization level compliance and security.
  5. Strive to automate your most effort-intensive and/or lengthy processes.

Each step on DevOps maturity scale will have many such learnings and best practices that must get shared across the length and breadth of the organization.

DevOps and Banking: Future Trends Prediction

Based on our understanding and knowledge of financial organizations, we believe that the following areas will see focus in the next few years:

Figure 3: Trends in short-medium term in DevOps adoption

Continuous Improvement within Federation

  • Technology teams will need to enable and encourage localized innovation within a federated boundary of toolsets that will, in time, bring Dev and Ops toolsets to near mirror images of each other.
  • By federation, we mean a detailed homogeneous blueprint that gives flexibility within an overarching boundary of tools and procedures, while encouraging continuous improvement.
  • This must also inform relevant stakeholders about areas for continuous improvement that make the most evolutionary impact to business and use metrics suitably to track and reward improvement.

Outcome Management

  • We also think that DevOps adoption needs to be driven by outcome management, with evidence of impact taking the driver's seat for road map management.
  • Focus will be on point of arrival (POA) and continuous measurement of business and IT metrics for outcome management.
  • We also believe that the Culture-Automation-Lean-Measurement-Sharing (CALMS) model of DevOps will find more believers.

Evidence of Impact

  • Metrics and metrics-based course correction will become the Holy Grail of the simplification exercise. Metrics collection will need to cover both business and IT, and the metrics and measures will need to be redefined as:
    • Business metrics for business value delivered or improved, such as new products/services launched, conversion rate, net promoter score (NPS), customer retention rate, CSAT, ESAT, etc.
    • IT metrics for value improvement and value management, such as cost performance index, reduction in defect debt, reduction in lead time to change, etc.
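As a rough illustration of how the two categories above can feed one shared dashboard (the field names and values here are hypothetical, not taken from any specific product), a combined metrics record might look like this:

```python
from dataclasses import dataclass

@dataclass
class BusinessMetrics:
    nps: float                 # net promoter score (-100..100)
    conversion_rate: float     # fraction of prospects converting
    customer_retention: float  # fraction of customers retained year over year

@dataclass
class ITMetrics:
    cost_performance_index: float  # earned value / actual cost; > 1.0 is under budget
    defect_debt: int               # open defects carried across releases
    lead_time_days: float          # lead time to change, in days

def outcome_summary(biz: BusinessMetrics, it: ITMetrics) -> dict:
    """Flatten both metric sets into one record for a shared dashboard."""
    return {
        "nps": biz.nps,
        "conversion_rate": biz.conversion_rate,
        "customer_retention": biz.customer_retention,
        "cpi": it.cost_performance_index,
        "defect_debt": it.defect_debt,
        "lead_time_days": it.lead_time_days,
    }
```

The point of the single record is that business and technical stakeholders course-correct against the same numbers, rather than each group tracking its own.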

Design Thinking

  • We believe that the next big disruptor in this space will be bringing design thinking into the mix to ensure that DevOps-led automation is more customer-centric and iterative.
  • We see focusing on test data management (TDM) across the organization as an important step to ensure continuous delivery.
  • In our opinion, the testing process will undergo a complete rethink with the support of processes such as risk-based prioritization, risk-based test case design, test bed splits, TDM and so on.

KM, Training and Governance

  • Tools and processes for knowledge management, training and governance need to ably support the DevOps-led simplification.
  • These processes will need systemic focus to create and nurture self-learning technology teams.
  • The way of working will need to be transformed to rationalize processes such as collaboration, enabling lean and improved handover of work packages.

DevSecOps

  • Security will be extremely crucial in the process of DevOps-led simplification. All the automation, cloud adoption, pipeline creation, IoT adoption and more are surfacing many new security concerns, thereby driving a focus on DevSecOps.
  • Security processes will potentially need to move their execution points earlier in the CI/CD pipeline and be divided into stages based on iteration sizes and deployment cycles.
  • Expect a focus on shifting application security left and on role-based security.
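As a minimal sketch of what shifting security left can mean in practice (the stage and check names below are invented for illustration, not any particular tool's vocabulary), security checks are attached to early pipeline stages instead of being deferred to a single end-of-pipe gate:

```python
# Hypothetical pipeline model: each stage lists the security checks that run
# inside it, rather than deferring all checks to a final pre-release gate.
PIPELINE = [
    ("commit", ["secret-scan", "dependency-audit"]),  # shifted left
    ("build",  ["static-analysis"]),
    ("test",   ["dynamic-scan"]),
    ("deploy", ["role-based-access-review"]),
]

def security_checks_before(stage_name):
    """Return every security check that has already run by the end of a stage."""
    checks = []
    for name, stage_checks in PIPELINE:
        checks.extend(stage_checks)
        if name == stage_name:
            break
    return checks
```

With this layout, a leaked credential or vulnerable dependency is caught at commit time, long before a deployment-stage review would have seen it.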

Conclusion

The next wave of simplification of IT spaces in banking organizations will be aided by DevOps. A systematic approach toward DevOps will help in finding the opportunities; it would be perilous to skip a detailed assessment of the existing spaces, one that captures the current state, the point-of-arrival simplified state, the road map for simplification and the key guiding principles for the transformation.

Carrying out pilots and proofs of concept in DevOps is not a problem; the challenge is scaling in all areas: tooling, training and processes. The key is to manage the horizontal and vertical spread of the same or similar toolsets, which also helps drive the efficiency that comes with scale and standardization. DevOps-led simplification will need changes to processes, technology and people management (culture). Current role structures might need to be dismantled to ensure there are no fiefdoms and that cross-functional teams handle all requirements of Dev-Test-Ops. Finally, all the changes need to define the new way of working so that Change-Run-Transform organizations can be brought together.

Source

Instana Integration Helps Accelerate Application Delivery with Splunk

Instana App Delivers Problem Resolution Information Directly Into Splunk IT Service Intelligence

San Francisco – October 30, 2018 – Instana announced a new integration with Splunk® IT Service Intelligence (ITSI), to solve challenges that can arise as organizations embrace a Continuous Integration and Delivery (CI/CD) application delivery strategy. The new Instana App delivers integrated access to Instana’s full-stack infrastructure and application monitoring information for Splunk users. Connected directly to Splunk ITSI, the Instana App for Splunk delivers Instana’s performance and configuration data, as well as events and automatic analysis of service issues.

“As organizations implement a CI/CD strategy, the continuous change of code and infrastructure causes performance visibility gaps in their dynamic microservice applications,” said Pete Abrams, Instana co-founder and COO. “Instana’s integration with Splunk goes beyond performance data to include application delivery events, service issues and problem analysis – which all become part of the integrated analysis with Splunk.”

Instana’s automatic application monitoring solution discovers application infrastructure and service components, deploys monitoring sensors for the application’s technology stack, traces all application requests – without requiring any human configuration or even application restarts. The solution detects changes in the application environment in real-time, adjusting its own models and visualizing the changes and impacts to users in seconds.

Available today on Splunkbase, the new Instana integration provides Splunk users with the most complete data and analytics to achieve observability of dynamic applications, allowing correlation of Instana’s data with other data in Splunk.

When service issues occur, automatic analysis takes over, leveraging AI and machine learning to isolate any issues, identify the probable cause of the problem and show users where to start their investigation. Service incidents, correlated events and the probable triggering event are all part of the information sent from Instana to Splunk ITSI.

“There are two related constants in today’s IT Operations environment – new technologies emerge and as they’re adopted, application complexity scales,” said Nancy Gohring, senior analyst for application and infrastructure performance at 451 Research. “The typical result is chaos for IT Operations, especially in the area of performance management. However, new tools and processes are available that maximize visibility while providing immediate feedback to the whole application delivery organization. The outcome is enabling teams to quickly identify and solve problems, even in these dynamic and complex environments.”

The Instana App provides unique value to Splunk ITSI customers via out-of-the-box APM and End User monitoring KPIs that enable faster MTTR, reduced downtime and increased productivity for DevOps teams. Unique functionality includes:

  • Application service metrics, including KPIs
  • Performance Data for the full application stack, including infrastructure
  • Flexibility to funnel any Instana metric into Splunk, including rich event data and problem resolution recommendations

With this broad set of combined performance data, event data, incident information and recommendations of where to look and how to resolve issues, AI Ops is becoming a reality.

The Instana App for Splunk is available today – learn more at www.instana.com and on Splunkbase.

About Instana:

As the leading provider of Application Performance Management solutions for containerized microservice applications, Instana applies automation and artificial intelligence to deliver the visibility needed to effectively manage the performance of today’s dynamic applications across the DevOps lifecycle. Founded by Application Monitoring veterans, Instana provides true AI-powered APM to help organizations deliver high performance applications today and in the future. Visit us at https://instana.com to learn more.

Source

Cockroach Labs Launches Managed CockroachDB: The Geo-Distributed Database as a Service

Available on Google and AWS at Launch

NEW YORK, October 30, 2018 — Cockroach Labs, the company behind CockroachDB, the ultra-resilient SQL database for global business, today announced Managed CockroachDB, the company’s entry into the database-as-a-service (DBaaS) market. Managed CockroachDB significantly reduces the time to value for companies without expertise deploying globally-distributed database systems.

Demand for SQL databases with the capabilities necessary to serve customers located across geographies is growing exponentially. While CockroachDB has traditionally been deployed on-premises by companies looking to take advantage of its unparalleled resilience, correctness, and scalability, many struggle to develop the internal expertise to deploy and manage rapidly evolving new technologies.

Managed CockroachDB is the first cloud-neutral, geo-distributed SQL DBaaS, launching initially on Google Cloud Platform (GCP) and Amazon Web Services (AWS). It was developed to give CockroachDB’s customers across the finance, gaming, health, and retail industries a fully-managed database for building resilient, globally-scaled services. Features such as geo-partitioning minimize latency for users by locating data in close proximity to demand; row-level data-domiciling constraints ensure compliance with data localization requirements. These are just some of the capabilities CockroachDB provides which can make a critical difference for companies doing business in, or looking to expand into new markets under the constraints of fast-evolving data sovereignty regulations.

“We’re launching Managed CockroachDB to meet the surging demand from customers who prefer to outsource the operational burden of deploying and managing a geo-distributed database,” says Cockroach Labs CEO and co-founder Spencer Kimball. “We’ve been seeing significant migration activity away from Oracle, AWS Aurora, and Cassandra, and we’re now able to get our customers to market faster with Managed CockroachDB.”

The launch of Cockroach Labs’ geo-distributed DBaaS coincides with the release of CockroachDB 2.1, which delivers powerful migration tools for MySQL and PostgreSQL as well as significant performance and scale improvements. CockroachDB 2.1 has been tested to handle 5x as much transactional volume as 2.0 and more than 50x the transactional volume as Amazon Aurora, when compared using the industry-standard TPC-C benchmark.

Businesses investing in the future are choosing CockroachDB because it brings SQL into the cloud era. It accommodates growing businesses by scaling elastically, even across the globe, to where customers are, with unparalleled resilience. With this release, it’s now available as a fully-managed and cloud-neutral DBaaS.

“I don’t fundamentally believe in Swiss Army knives or solutions that solve every problem, but CockroachDB gets surprisingly close to solving a large number of relevant challenges that we see in the industry,” says Simon Kissler, Associate Director of Emerging Technology at Accenture.

About Cockroach Labs

Cockroach Labs is the company behind CockroachDB, the ultra-resilient SQL database for global business. With a mission to Make Data Easy, Cockroach Labs is led by a team of former Google engineers who have had front row seats to nearly two decades of database evolution. The company is headquartered in New York City and is backed by an outstanding group of investors including Benchmark, G/V (formerly Google Ventures), Index Ventures, Redpoint, and Sequoia. Learn more at www.cockroachlabs.com.

Device42 Launches Integration with Atlassian to Power Smart and Responsive IT Asset Management

NEW HAVEN, Conn., Oct. 30, 2018 /PRNewswire/ Device42, Inc., a provider of IT infrastructure management software, today announced an integration with Atlassian, the leading collaboration and software development management company. Customers of Atlassian’s Jira Service Desk, the IT helpdesk ticketing system, can now benefit from Device42’s comprehensive solutions to gain better visibility and control over their technology assets.

Jira Service Desk is a popular platform for IT teams to receive, track, manage and resolve customer requests. With Device42’s integration, every ticket sent to a company’s service desk is accompanied by the associated hardware and software specifications, as well as a history of incidents related to that issue. Having access to such information helps both onsite and offsite support agents quickly identify which technology assets to troubleshoot so they can immediately provide a solution, whether it’s replacing a hard disk or switching the power supply.

“Using the new cloud asset management integration with Device42, our customers will now gain a holistic view and more context across all their assets right within Jira Service Desk,” said Bryant Lee, head of partnerships and integration at Atlassian.

Device42 has more than 500 enterprise clients in 55 countries that rely on its software to manage their complex IT infrastructure quickly, easily, and effectively. The company was recently ranked as the fastest growing technology company in Connecticut, with a whopping 1,300 percent revenue growth from 2015 to 2017.

“Enterprises are faced with a mounting challenge as they attempt to maintain up-to-date inventories of their physical, virtual, and cloud servers,” said Raj Jalan, founder and CEO of Device42. “The integration will provide Atlassian customers with a robust tool that will not only help them solve mission-critical issues at breakneck speed, but also glean powerful insights about their technology infrastructures that will take their IT departments to a whole new level.”

About Device42

Device42, Inc., a leader in Data Center Infrastructure Management (DCIM), delivers comprehensive, low-cost, high-value solutions that enable organizations around the globe to manage their complex IT infrastructure quickly, easily, and effectively. Device42 software centralizes data center management, making IT assets visible, understandable, and controllable. Additional information about Device42 can be found at device42.com.

Source

How automation will revolutionise the business world

Steve Thair Talking

Over the past few years, automation has become a hot topic in the technology world. It’s all about speeding up complex, time-consuming processes by reducing human input and simultaneously minimising the likelihood of human error.

This, of course, has led to mixed headlines in the media. While some people are worried that this technology will put them out of a job, others take a more positive stance. In the DevOps space, automation is viewed as a tool that can help organisations streamline the way they create and deploy software.

According to the 2018 Accelerate State of DevOps Report, automation underpins High-Performance IT organisations. DORA writes: “With more work automated, high performers free their technical staff to do innovative work that adds real value to their organisations.”

Instead of spending time on manual work, high-performing teams are consistently developing innovative and valuable products for users. They’re automating configuration management, testing, deployments and change approval processes.

In this short Q&A, DevOpsGroup Co-Founder and CPO Steve Thair will explain how organisations can automate the software deployment process to stay ahead of the curve and keep the customer happy. He’ll also explore the role automation plays in enabling reliability and predictability.

1. How is automation enabling High-Performance IT?

Automation is a key enabler of moving faster. Unless you automate your software development, testing and environment processes, you’ll struggle to get applications out to market quickly.

You can’t do that if you have to wait several days for someone to manually test and approve the product. Organisations that are the most adaptable to change and that deploy their products quickly will stay ahead of the curve and outflank competitors.

2. How are DevOps and automation intertwined?

It’s fair to say that DevOps was born out of automation. Many of the early adopters of this practice were using automation tools such as Puppet, Chef and Ansible, as well as the automation capabilities of cloud vendors to become high performers.

The founders of these companies, such as Adam Jacob of Chef, were heavily involved in the start of the DevOps movement too. They took the feedback from the DevOps community to incorporate this approach into their automation products. As a wider cultural movement, DevOps and automation assist each other and generate great benefits for companies.

By adopting the principles of DevOps – particularly flow, lean and better management practices – you get significant improvements throughout your organisation. But when you apply automation to the mix, you can take your digital transformation initiative to the next level.

3. What is the value of automation?

The key benefit of integrating automation into the DevOps process is the creation of a robust, secure and validated pipeline that gets code from source to running in production. What’s also important is that this pipeline puts telemetry and feedback loops back into the development process, allowing engineers to see how their applications are performing.

My personal view about the future of operations is that we won’t be worried about servers and individual release packets; our focus is going to shift up a level to owning, building and managing the pipeline and the orchestration of the production environment.

So rather than worrying about each release, we’ll care about the automated pipeline we’ve built. Having a pipeline that links development and operations inherently leads to better collaboration.

4. You see a lot in the media that automation will replace jobs. What is your view on this?

A lot of people say that every time an industrial revolution takes place, new jobs are created to replace the old. That’s certainly true. However, the people who get the jobs aren’t necessarily the people who lost the old ones.

Ultimately, you have to be flexible and be willing to evolve your career while acquiring new skills to take advantage of these new technologies. It’s essentially adapt or die. If you just keep doing things the same way, you’ll eventually be out of a job.

One of our consultants, Chris Taylor, frames this really well. You can choose to be on the bus and learn how to use all of these automation tools – creating increased value for yourself, your organisation and your customers.

Or you can choose to be off the bus. The trick for management is being clear that people are either on or off the bus, not under it. Leaders should give people a clear choice and ensure they have the opportunity to acquire new skills.

5. Can you provide five automation tips for organisations transitioning to DevOps?

The first step is definitely Systems Thinking. Instead of trying to optimise each step of the pipeline, you should optimise the flow around it. All of the research out there – particularly Donald Reinertsen’s book The Principles of Product Development Flow – shows that the efficiency of a system is not about the parts but rather the entire flow.

When you’re designing something, don’t design the build, test and release processes on their own. Map the whole thing out, get everyone in the same room and collaborate on what it’s going to look like.

For the second step, I’d say put everything in code where possible: database configuration, source control, infrastructure and build tools. For instance, if you’re using a tool like Jenkins, you can use its job builder to keep configuration as code. But everything has to come back to being in the source code repository.

Third, think about APIs; in particular, you’ve got to get comfortable with REST APIs. Inevitably, you’re going to need to glue some elements together, and the tools you’re using must expose a command line or an API so that the glue itself can be written as code.
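As a small illustration of that glue pattern (the endpoint path, payload and token here are hypothetical, not any real CI tool's API), a job-trigger call might be assembled like this:

```python
import json
import urllib.request

def build_trigger_request(base_url, job_name, token):
    """Construct (but do not send) a REST call that triggers a pipeline job.

    The endpoint and payload are invented for illustration; real CI tools
    each expose their own API, but the glue pattern is the same:
    JSON over HTTP with an auth header.
    """
    payload = json.dumps({"job": job_name}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/jobs/{job_name}/trigger",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

Because the call is plain code, it can live in the same repository as everything else and be reviewed, versioned and reused like any other script.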

Fourth, telemetry is crucial. Consider the data exposed by your automation process. You need to be able to look at the overall flow, but also the information you’re getting from each stage of the pipeline. It’s got to be easy to see if a build failed, or whether the release to production was successful on ten servers but failed on others. And you need to have processes in place to collect and measure data.
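As a minimal sketch of that per-stage view (the event records below are invented), telemetry events can be rolled up so a release that succeeded on some servers but failed on one is immediately visible:

```python
# Hypothetical telemetry: one event per (stage, target) with a pass/fail outcome.
events = [
    {"stage": "build",   "target": "ci-agent", "ok": True},
    {"stage": "release", "target": "web-01",   "ok": True},
    {"stage": "release", "target": "web-02",   "ok": False},
    {"stage": "release", "target": "web-03",   "ok": True},
]

def stage_summary(events):
    """Roll per-event telemetry up into (passed, failed) counts per stage."""
    summary = {}
    for e in events:
        passed, failed = summary.get(e["stage"], (0, 0))
        if e["ok"]:
            passed += 1
        else:
            failed += 1
        summary[e["stage"]] = (passed, failed)
    return summary
```

A summary like this answers the question in one glance: the build stage is clean, but the release stage passed on two targets and failed on one, so that is where the investigation starts.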

The fifth and final tip is an emerging area: thinking about compliance and security across your organisation. Both Azure and AWS are doing a lot of work with policy, and HashiCorp has the Sentinel Framework – which is essentially policy as code.

And, of course, people are going to look at these systems and will want to know how they give them separation of responsibility to meet their PCI DSS requirements. Things like single sign-on and role-based access control are becoming increasingly important because they underpin the ability to manage product environments and the applications that sit on top.

If you’d like to learn more about the fundamentals of DevOps, check out the courses on offer at the DevOpsGroup Academy.

Source