News & Events

“DevOps lite”: transition your IBM i development to CI/CD

DevOps helps businesses respond rapidly to market changes by delivering higher-quality software updates to users more quickly. But a shift to “full DevOps” can be challenging for IBM i teams faced with project deadlines and lacking the time window needed to switch wholesale to a new CI/CD model.

Our Webinar will demonstrate a simplified CI/CD toolchain that offers IBM i teams a progressive approach to DevOps adoption, based on Git, Jenkins and Jira, to gain:

  • Security in the deployment process
  • Reduced risk of errors in production
  • Minimized downtime risk
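As a concrete illustration of such a toolchain, the sketch below shows a minimal, commit-triggered quality gate that a Jenkins job could invoke after checking out a Git branch. The build and test commands are placeholders for your own IBM i build and test steps (assumptions for illustration, not part of the ARCAD toolchain); Jenkins treats a non-zero exit code as a failed build.

"""Minimal CI quality gate a Jenkins job might call after a Git checkout.
The build and test commands below are placeholders for your own steps."""
import subprocess
import sys

BUILD_CMD = ["make", "build"]                      # placeholder build step
TEST_CMD = ["python", "-m", "pytest", "tests/"]    # placeholder test suite


def run_step(name, cmd):
    """Run one pipeline step; exit non-zero so Jenkins marks the build failed."""
    print(f"--- {name}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"{name} failed with exit code {result.returncode}")
        sys.exit(result.returncode)


if __name__ == "__main__":
    run_step("build", BUILD_CMD)
    run_step("test", TEST_CMD)
    print("Quality gate passed - this commit is safe to promote")

In a fuller pipeline, the same gate could also post its result back to the related Jira issue, closing the Git–Jenkins–Jira loop.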

Smooth the transition to modern CI/CD tooling on IBM i.  Register Now!

Registration

The presenters

Nick Blamey

ARCAD’s Director of Northern European operations

Nick Blamey joined ARCAD from IBM where he was responsible for DevOps and Rational solutions in various roles across Europe. Previously Nick worked for other software development tools organisations including HP / Micro Focus, Fortify Software (acquired by HP), Empirix (acquired by Oracle), Radview and Segue (now Micro Focus). Nick is a thought leader in the areas of Static Code Analysis, Testing Automation, DevOps and Shift-Left strategies.

Floyd Del Muro

Technology and DevOps Advocate, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is currently Technology & DevOps Advocate for ARCAD Software, managing the IBM relationship and partnership with the IBM Cloud, IBM Systems and product managers for Rational Team Concert (RTC), Rational Developer for i (RDi) and UrbanCode (UC).  In his role at ARCAD Software, Floyd has been directly involved in the management of modernization projects on IBM i, from planning stages through to delivery, spanning modernization of the database, business logic and UI.  Drawing on his experience in project rollout and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, continuous delivery and test automation.


Fixing the “last bottlenecks” of test automation for Legacy IBM i (AS/400) with ARCAD

By Nick Blamey | February 15th 2019


How Test Automation combined with Batch Testing and Defect Analysis at the Database level can optimize your DevOps process.

With full end-to-end testing through the User Interface already established at many organisations, the challenge of “shifting defects left” in legacy applications still exists. Service Virtualisation is an option for testing multi-tier applications with a legacy core, but this level of abstraction can often leave a substantial body of mission-critical application logic unchecked, presenting a significant risk to your project. This article explores several options available to resolve these challenges.

A thought-provoking blog by ARCAD’s Director of Northern European operations, Nick Blamey

The business problem

Time-to-market and the associated benefits of providing rapid feedback to developers are a key part of any DevOps strategy. Standard approaches of catching defects at the UI layer – while important – still fail to eliminate a number of key challenges organisations face, particularly in the following situations:

  • Where significant, mission-critical application functionality resides in complex Batch processes
  • Where a number of legacy applications are involved, e.g. on the IBM i / Mainframe, which tends to result in a multi-speed development approach and complex defect triage between the new application development teams and the legacy core teams
  • When isolating defects directly within the Database, and understanding how changes to application programs directly affect Database-level functionality
  • Where obfuscation and anonymisation of Test Data are demanded as part of the compliance process

Most organisations are embracing End to End UI Testing automation using a combination of the following solutions:

  • HCL OneTest
  • Tricentis TOSCA
  • Worksoft Certify
  • Rational Functional Tester
  • Selenium
  • Mobile Application Testing Tools e.g. Eggplant, Perfecto Mobile etc.

Each Test Automation solution has advantages and unique capabilities to support different Testing Approaches, but they all lack the ability to test, locate (triage) and fix complex application errors buried deep in the legacy applications themselves on the IBM i (AS/400).

A different paradigm for testing and locating defects with Legacy Systems

We can analyse the three main challenges and potential bottlenecks to the process which cannot be eliminated by these standard tools as follows:

  • Batch Testing: batch programs tend to embed large-scale, mission-critical application functionality in batch processes, combined with…
  • Complex Database and IBM i Spool file defects buried in the Database layer, which result in exceptionally long debug cycles for developers to find and locate defects.
  • Test Data anonymisation challenges, including continual Test Environment refreshes to cope with GDPR and other compliance requirements.

From ARCAD’s experience with some of the largest legacy application estates, these challenges are becoming more and more complex to solve. The rest of this document explains both why these challenges can limit the speed of your DevOps process and how you can solve them using ARCAD solutions.

Why are Batch Testing, Database Level Testing and Defect Triage each a major potential challenge and potential bottleneck to your development process?

  • There are many ways to solve these problems, but each involves manually coding a bespoke solution – e.g. for Batch Testing – with the inherent risks of “roll your own software”: the knowledge required to create your own Batch Testing solution, and the business risk of its ongoing maintenance.
  • Debugging Database-level defects through the User Interface tends to burn massive development effort, with the “triage of defects” embedded in multiple components being one of the key limiters on achieving DevOps aspirations.
  • Message Queue style Spool file defects tend to cause some of the most serious defects in legacy IBM i applications, which further exacerbates the problem.

The goal of the remainder of this article is to explain in detail both the options at your organisation’s disposal to resolve these problems, and the solutions available from ARCAD proven by multiple software development teams, typically developing on a combination of IBM i legacy applications and the front-end apps leveraging the business logic contained within the legacy.

“Unpicking the Spaghetti of Batch processes”: Common Defects from Batch are easily diagnosable in the Spools and Database

The most common defects created by Batch programs tend to stem from simple mistakes in the configuration of Spool and Database commands. However easy it is to “fall foul” of these simple but potentially “showstopping” defects, the effort most teams expend trying to reproduce and fix them can massively limit a development team’s ability to make risk-free changes to applications where Batch, Spool and database edits form a crucial part of their functionality.

Typical Testing Challenges with Spool, Database and Batch can be classified as follows:

  • The requirement to hand-script a batch execution, and then to write complex database code to verify that, on completion of the batch processes, the correct tables have been updated accurately in the test database (a sketch of this kind of hand-scripted verification follows this list).
  • The need to confirm that application functionality remains the same when changes are made to code which interacts with Spool and Database components. Programmers need to be able to understand, debug, fix and re-test complex functionality which touches these components. A typical example is where the developer needs to check that a large table of data is “read in its entirety”: if this functionality contained a defect and only a subset of the data were processed, the result could be both functional defects and degraded application performance.
  • Creation of an application-specific testing framework, with the inherent challenges of hand-coding error reporting and test data refresh capabilities, and the associated risk that the Test Harness / Framework becomes more complex than the application code itself.
  • Creation of error handling functionality which risks either displaying too many additional errors (some of which can originate in the Test Framework / Harness itself), creating even more work for developers in fixing them – defect overload – or, worse, serious errors being deemed part of the Test Harness and ignored.
  • Developers spending a great deal of time creating end-to-end tests and the associated documentation, when a simple “recording” of how users actually use the application would give a true and up-to-date view of the end-to-end journeys real users perform in production day to day.
  • The need to produce Test Coverage metrics for compliance reasons, to show that all the application functionality changed in the latest iteration has been fully tested and that 100% code change coverage has been achieved. The ability to exploit the functionality of the Rational Developer for i (RDi) IDE will become more and more important as the DevOps cycle accelerates and more application versions are pushed into production.
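As a concrete example of the first challenge above, the sketch below shows the kind of hand-scripted, post-batch verification many teams end up writing and maintaining themselves. The ODBC DSN, table name and expected row count are hypothetical, introduced only for illustration.

"""Sketch of hand-scripted post-batch database verification.
The DSN, table and expected row count are hypothetical placeholders."""
import pyodbc

EXPECTED_ROWS = 125_000  # hypothetical figure derived from the batch input file

def verify_batch_output(dsn="TESTLPAR"):
    conn = pyodbc.connect(f"DSN={dsn}")
    cur = conn.cursor()
    # Check that the batch wrote today's rows into the summary table
    cur.execute("SELECT COUNT(*) FROM ORDERS_SUMMARY WHERE RUN_DATE = CURRENT DATE")
    (row_count,) = cur.fetchone()
    conn.close()
    if row_count != EXPECTED_ROWS:
        raise AssertionError(
            f"Batch incomplete: {row_count} rows written, {EXPECTED_ROWS} expected")
    print("Batch output verified")

if __name__ == "__main__":
    verify_batch_output()

Every such script becomes one more asset to maintain as the schema and batch logic evolve, which is exactly the maintenance burden described above.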

These challenges above can all be handled seamlessly and most importantly with zero programming effort using the ARCAD-Verifier Test Automation solution, described below.

DevOps requires Build Verification Testing (BVT) / Smoke Testing

To ensure that a “testable” application is passed from development to the QA team in rapid iterations, many organisations create a suite of Build Verification Tests (BVT). This process is key to ensuring that defect tracking systems are not filled with “test environment unavailable” or “incorrect data in Test Environment” defects.

Smoke testing covers most of the major functions of the software but none of them in depth. The result of this test is used to decide whether to proceed with further testing. If the smoke test passes, go ahead with further testing. If it fails, halt further tests and ask for a new build with the required fixes. If an application is badly broken, detailed testing might be a waste of time and effort.
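A BVT suite can be very small. The sketch below, assuming a hypothetical test LPAR host name, simply checks that the 5250 (telnet) and Db2 host-server ports are reachable before any deeper testing starts; a failure means “request a new build”, not “raise application defects”.

"""Minimal build verification (smoke) test suite.
The host name is hypothetical; check whatever your test LPAR must expose."""
import socket
import unittest


class BuildVerificationTests(unittest.TestCase):
    HOST = "testlpar.example.com"   # hypothetical test LPAR

    def test_telnet_port_open(self):
        """The 5250 service must be reachable before any UI test can run."""
        with socket.create_connection((self.HOST, 23), timeout=5):
            pass

    def test_database_port_open(self):
        """The Db2 host server must be reachable for data-level checks."""
        with socket.create_connection((self.HOST, 446), timeout=5):
            pass


if __name__ == "__main__":
    # Any failure here means: stop, request a new build; do not start full QA.
    unittest.main()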

The risk to any organisation of NOT having a BVT / Smoke Testing process is a large number of additional defects being created, each of which needs to be analysed, triaged and fixed even when the defects are actually NOT software problems but rather Build Verification problems.

The effect of a lack of Build Verification Testing is that a typical defect classification report tends to show many defects classified as “cannot reproduce error”, “test environment issue” or “test data issue”, slowing down development and distorting the ability of software development organisations and application stakeholders to make rational decisions about the severity of particular defects.

Problems Associated with Batch Processing

The biggest challenge facing IBM i developers is that a “batch” workload by definition interacts with and impacts many components / programs, and is often critical functionality that runs “headless”, i.e. in an unconstrained mode.

A batch job will always complete its workload in the shortest possible “low application usage” window to eliminate the impact on application performance. The challenge this creates for IBM i developers is that a batch process attempts to override any other workload running simultaneously because it will always prioritize execution speed over any resource balancing on the hardware it is running on. The only time a Batch process will lose priority is when its execution is limited by a system bottleneck i.e. normally the smallest Data Flow point in the application.

Because of the two problems detailed above, many developers look to perform scheduled batch jobs out of hours which can limit the functionality of the application whilst users wait for the “overnight batch jobs” to complete before performing their work.

Modern DevOps development teams on the IBM i have to constantly manage the following challenge: the need for batch jobs to execute quickly, due to the “mission criticality” of their work and the reliance of other application functionality on their completion.

Batch testing: make sure that your batch performance does not cause defects downstream by non-completion

Whilst many Batch applications on the legacy were written many years ago, by definition they often contain mission critical functionality constituting a major part of your application’s value. For these reasons the continual enhancement of batch functionality presents a number of challenges:

  • There is a need for a DevOps solution to continually maintain the batch processes and associated Test Data and Test Environments.
  • Batch processes tend to require a deep understanding of the architecture of the entire application.
  • Batch processes tend to run in parallel meaning a failure in one batch process can result in exceptionally complex defect triage and fixes.
  • Batch processes can cause disruption to the performance and functionality of an application if they run incorrectly.

For this reason, many organisations attempt to create a totally seamless process for Batch Process change/commit and then attempt to make these changes available to the Test LPAR.  The effort involved in trying to perform Batch testing can then become punitive with the maintenance of the batch test process over-consuming development team resources and causing project delays.

Why direct your testing efforts onto Batch and Database level testing?

  • Defects in Batch and database are the most difficult to find, most costly and most risky to transfer into live production without end-to-end testing
  • Batch processes by definition are high-risk, typically a culmination of a number of transactions each creating value and each with a high potential impact. Manual rework of Batch processes in production tends to be the most difficult and expensive to fix.

How severe can Batch defects become?

Because of the complexity of testing Batch and their specific data and test environment requirements, batch failures have led to some of the most publicized incidents ever witnessed. A particularly high-profile incident was the prolonged downtime at RBS in 2015 – and a similar episode in 2012 – caused by batch process failures at the bank impacting hundreds of thousands of customers.

https://www.computerworlduk.com/it-leadership/how-management-failings-led-rbs-it-catastrophe-3586917/

Batch defects also tend to be neglected by many “end-to-end” test processes, but due to their mission criticality, their requirement to run and complete in a specific application usage window (normally when application usage is lower) and their potential impact on the usability and performance of the application, certain defects in batch functions can “slip through the net” and reach production with disastrous consequences.

A particular concern with batch testing is performance. If a batch process runs daily at close of business and application response time suffers, then batch performance is a factor to consider when creating a risk-based testing process.

https://www.softwaretestinghelp.com/which-defects-are-acceptable-for-the-software-to-go-live/

Testing a multi-tier application with Dependencies between legacy IBM i and new “front-end” components: what are your options?

The table below details the options available and benefits and pitfalls of each option, based on ARCAD customer feedback.

Each option is listed with its advantages and risks; challenges common to every option follow at the end.

Option 1: Don’t bother – monitor batches in production
  • Advantages: Zero effort in development. Production defects become a testing and then an operations problem, shifting them away from the root cause, which is developer defects.
  • Risks: Defects tend to be discovered out-of-hours, lengthening resolution times, and re-execution of the batch doesn’t fix the problem: the defect becomes worse.

Option 2: Test at the UI and stub out legacy systems
  • Advantages:
      – Classifies defect triage, allocating defects to the correct middleware teams.
      – Eliminates Test Data challenges, as the “stub” can respond in any way the user configures it.
      – Allows load to be thrown at the stub, mimicking response times, and eliminates the legacy system from the process.
  • Risks:
      – Impossible to re-write application functionality in the stub, hence edge testing becomes impossible without the IBM i LPAR and Test Data refreshes.
      – Too much coding required to understand the impact on user data post batch execution, e.g. a credit check.
      – EDI problems become more and more complex when working with multiple 3rd-party and in-house components.
      – Edge cases are too difficult to triage, and attempting test automation without the IBM i in the correct state – particularly with batch processes – becomes risky and makes it impossible to locate all defects.

Option 3: Hand-script the batch execution and then write specific code to verify the correct update to, and contents of, the test database
  • Advantages: A highly configurable “self-written” solution controlled and maintained by your in-house development team.
  • Risks:
      – Requires massive programming effort and specialist skillsets.
      – Takes developers away from their day-to-day value-add development tasks.
      – Does not cope with changes in the database layer during batch program execution.

Option 4: ARCAD-Verifier
  • Advantages:
      – Optimises your current end-to-end UI functional testing with dependency builds, mining value from the ARCAD repository, which limits the number of tests to be executed.
      – Tests the entire application flow for the IBM i.
      – Easily creates Source Code Analysis (with ARCAD CodeChecker) and Unit Tests (with ARCAD-iUnit) to complement UI Tests with ARCAD-Verifier.
      – “Under the bonnet” tests of Database, Spool files and Batch process execution.
      – Can be used for profiling of application performance with ZERO effort, i.e. which component of your IBM i application has slowed down or speeded up in the latest build.
      – Delivers additional value in optimization of the release process: Verifier can effectively eliminate the risk of passing poor code to the testing team, eliminating release pain, risk and cost thanks to earlier detection of defects. This value derives from ARCAD’s dependency-based testing using the cross-reference information mined from the ARCAD repository.
  • Risks: Minimal, but requires investment in ARCAD – normally accompanied by a compelling business case based on the “shift left” ROI metrics which ARCAD can supply – plus deployment of ARCAD-Verifier and up-front, one-off time spent recording test scenarios.

Challenges common to each option:
  • No prioritisation of the most important batches and their frequency of execution: one size fits all for batch monitoring.
  • Root cause very difficult to isolate; classification is left to the tester.
  • Self-written stubs and testing tools eliminate efficiencies and present a business risk.

Additional Value of the ARCAD-Verifier Solution

ARCAD-Verifier is unique on the market in its ability to handle batch processes. This gives your organisation a seamlessly integrated solution for eliminating the testing bottlenecks in your process and ensuring a “frictionless” DevOps CI/CD pipeline.

ARCAD-Verifier can check that a process executed to completion by automatically running tests on the batch process, re-setting the test data and then re-iterating the test to check for defects.
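The pattern is essentially reset, run, verify, repeat. The sketch below illustrates that loop in generic terms only; every function body and name is a placeholder for whatever your environment provides (this is not ARCAD-Verifier’s internal implementation).

"""Generic illustration of the reset / run / verify loop for batch testing.
All names and function bodies are placeholders, not ARCAD-Verifier internals."""

def restore_test_data(snapshot):
    # e.g. restore a saved library / reload tables (environment specific)
    print(f"restoring test data from snapshot {snapshot}")

def run_batch(job_name):
    # e.g. submit the job and wait for completion (environment specific)
    print(f"submitting batch job {job_name}")
    return 0  # placeholder return code

def verify_results(expected):
    # e.g. compare the post-batch database state against a known-good extract
    print(f"comparing database state against {expected}")
    return True  # placeholder comparison result

def run_scenario(scenario):
    restore_test_data(scenario["snapshot"])
    rc = run_batch(scenario["job"])
    assert rc == 0, f"{scenario['job']} did not complete"
    assert verify_results(scenario["expected"]), f"{scenario['job']} output mismatch"

if __name__ == "__main__":
    # hypothetical scenario list; every name here is illustrative only
    scenarios = [
        {"snapshot": "BASELINE1", "job": "NIGHTLYUPD", "expected": "orders_after.csv"},
    ]
    for s in scenarios:
        run_scenario(s)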

EDI and Batch-specific capabilities:

ARCAD can seamlessly simulate the insertion of files into any database running on the IBM i connected to the database(s) of external companies or organizations.

By the simple creation of batch tests through ARCAD-Verifier scenario recording, test data refresh and rollback, development teams can dramatically reduce the time wasted in the debug process, whilst directing the correct development team to the exact “root cause” of the defect.

Then, by using ARCAD-Observer, developers can more rapidly understand the application structure and cross-reference dependencies, adding value to the development effort when isolating and debugging complex batch process defects.

Just like any other IBM i application development, batch-based logic requires a seamless development cycle. However, in the context of batch, the need for developers to rapidly perform Impact Analysis and identify Dependencies between application components and batch file output becomes even more important. ARCAD for DevOps offers additional capabilities to further eliminate these potential bottlenecks to the development, testing, debug and re-deployment of Batch processes on the IBM i.

Build Verification Testing (BVT) easily automates rapid developer feedback and eliminates defect triage challenges.

Modern development processes and DevOps require Smoke Testing to be executed rapidly against each build, once a developer commits code and deploys to the test environments.

UI Testing and Test Automation: Most of it is below the surface!

Many development teams have traditionally relied on the creation of Functional Tests and Load Tests which focus on the user interface (UI). With legacy applications, a significant proportion of the application functionality is buried deep in the IT infrastructure; the associated risks increase, and the defects uncovered become especially expensive to locate and triage.

Expensive defects are more likely to arise when changes to batch processes, spool files and Database updates occur deep within the IBM i application structure.

The limitations of functional testing

Most organizations perform functional testing of their applications through the UI, as required for compliance reasons. However, as with black box testing, since each and every defect needs to be diagnosed from the UI, this approach brings little information to help developers actually fix problems. The result is typically a constant stream of defects classified as:

  • cannot reproduce the issue
  • test environment not set up correctly
  • require more information

Each of these is especially complex where Batch processes and Test Environment state issues are involved.

With poor information for developers, the challenges are pushed downstream.  Projects face delays due to lengthy code understanding and a sub-optimal debugging approach. This severely limits the ability for any IBM i development team to maintain the “speed of delivery” required for meeting their DevOps targets.

[Diagram: the test automation “iceberg” – most application functionality sits below the UI surface]

The diagram above illustrates a typical “complex” defect which can result from a change to a spool file, a batch process impacting other programs running on the IBM i and associated integration points with 3rd party components and other teams not associated with the IBM i.

Many organisations try to eliminate this problem by re-focusing their efforts on hand-coding functional tests at the UI level and hand-coding Batch tests and database-related checks, but this manual effort is costly and time-consuming. Further, these “in-house”-built batch, database and spool file tests can actually become (rather than resolve) the Testing Bottleneck, due to the ongoing maintenance of these assets.

Many clients now advocate the use of ARCAD tooling to specifically eliminate this potential risk to your frictionless DevOps process.

The limitations of UI Testing

The graphic below depicts the risks of focusing testing effort on the User Interface.

A reliance on UI-based testing leads to increased cost and project risk due to the challenges of isolating complex defects to their root cause deep within application components and architecture.  In practice, with this method, developers typically take delivery of a defect report from a Defect Tracking system and then spend a high percentage of their time trying to localize and reproduce the defect. This causes the “time to resolution” to dramatically increase and results in a continual “reclassification of the defect” as the defect moves from team to team with each developer unable to resolve it.  Defect Tracking Solutions like HP Octane / HP Quality Center collect metrics on these defects and tend to show an increase in the “average time to resolution” for most IBM i defects.

In addition, for development teams who are required to perform a full end-to-end test of their application for each and every component change, the continual cycle of debug, code change, re-deploy and re-test can dramatically increase testing effort and cause project delays.

[Diagram: defect triage and resolution without ARCAD]

Getting under the bonnet of your Database to trap defects

If we think of any application as a motorised vehicle and the data as the Fuel which powers the vehicle then a large proportion of any testing effort needs to be on the “flow” of the fuel (i.e. DATA) through the application itself.

If an engine stopped working whilst on the race track, the engineer would of course be interested in areas of testing which focused on blockages in the fuel flow. They would diagnose the problem directly in the Engine looking for the blockage.

No engineer would try to look at the vehicle’s dashboard (i.e. User Interface in software terms) to understand a fuel blockage in the Engine.

Using this analogy in software, with legacy applications, software engineers should direct their efforts to the flow of data into databases and then into the application logic. This presents the following challenges to the Testing Process for legacy applications:

  • Firstly, test data management: nothing is more volatile, more difficult to reinitialize and more difficult to compare than test data.
  • Secondly, scenario maintenance: creating scenarios is not everything; they must then be maintained in full synchronisation with the application versions themselves.

ARCAD Verifier delivers Optimization of your Testing Process

The diagram below explains the additional value which ARCAD can deliver to your testing process with no additional effort.  As developers create Unit Tests on individual programs and functional tests at the green-screen or UI layer to check their work, the combined suite of “assets” can be converted into a suite or campaign of Regression Tests for execution against each and every version of the application following compilation and automatic deployment to your testing LPAR.

In addition, ARCAD Verifier can “mine” the data stored in your ARCAD repository to limit the number of end-to-end tests required – saving substantial time and effort – by only needing to execute the tests which are impacted by changes to the individual programs.
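Conceptually, this test selection works like the sketch below: given a map of which scenarios exercise which programs, only the scenarios touching changed programs are scheduled. The map and names here are invented for illustration; ARCAD-Verifier derives the real cross-reference data from its repository.

"""Sketch of dependency-based test selection.
The dependency map and scenario names are invented for illustration."""

# program -> scenarios that exercise it (illustrative data)
TESTS_BY_PROGRAM = {
    "ORDENTR": ["scenario_order_entry", "scenario_order_edge_cases"],
    "INVUPD":  ["scenario_inventory_update"],
    "CUSTMNT": ["scenario_customer_maintenance"],
}

def select_tests(changed_programs):
    """Return only the scenarios impacted by the changed programs."""
    selected = set()
    for program in changed_programs:
        selected.update(TESTS_BY_PROGRAM.get(program, []))
    return sorted(selected)

if __name__ == "__main__":
    # e.g. the latest commit touched only the order entry program
    print(select_tests(["ORDENTR"]))
    # -> ['scenario_order_edge_cases', 'scenario_order_entry']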

ARCAD’s unique capabilities

With ARCAD Verifier, the value of the application metadata stored in the ARCAD repository about your specific code base ensures that the number of tests to be run after each code change is limited to only those impacted by that change.  Defects are diagnosed back to their “root cause” in individual components, and application changes can be re-compiled, re-deployed and re-tested at the “touch of a button” delivering a frictionless DevOps CI / CD pipeline.

Finally, ARCAD Verifier fully integrates into Rational Developer for i (RDi) and the Test Coverage capability to create a fully compliant, documented code coverage audit process with zero effort. Tests at the Spool, Batch or Database level are created automatically, and the precise origin of defects is isolated to deliver a fully optimized “shift left” of defects which can form an integral part of any organisation’s “continuous improvement” process.

Continuous Test as part of an automated CI/CD pipeline

DOT-Verifier can be integrated into Jenkins via plugins. When setting up continuous testing processes, Jenkins can drive the execution of DOT-Verifier scenarios and the delivery of comparison reports. DOT-Verifier can also be integrated with most non-regression testing (NRT) tools on the market, in which case it handles the test data reinitialization and comparison side of the process.

ARCAD Testing solutions

The graphic below highlights the capabilities ARCAD offers for testing optimisation.

ARCAD Verifier and ARCAD for DevOps can take packages of applications through the development cycle and deploy them into the test environments on the IBM i LPARs in a seamless cycle. Tests on all components can be executed as described above isolating defects at the Database, Batch, 5250 UI, Spool and cross-reference layers with minimal effort. Any packages which pass their quality gate tests can then move seamlessly into Production.

DOT Anonymizer from ARCAD provides anonymization – or masking – of production data mined from the Production LPAR to deliver a secure, flexible and GDPR-compliant Testing solution.  ARCAD for DevOps then delivers anonymized data to the test environments for use by testing teams, whilst maintaining the “homonymity” of the application data so that “edge case” testing can be performed in order to maximize the number of defects discovered.  Results are then pushed automatically into your enterprise defect tracking/ticketing system, such as Jira, HP Quality Center, etc.

ARCAD-Verifier: “under the bonnet” defect triage, spool file debug and data debug

Read more about the business value of continuous testing for optimization of application debugging on IBM i in our last blog.

https://arcadsoftware.com/news-events/blog/test-automation-and-source-code-analysis-for-ibm-i-why-bother-enforcing-a-new-quality-gate/

Recommended Next Steps:

An Audit of your current IBM i Testing process is an excellent starting point to your journey to a quality DevOps implementation.

To find out more about how ARCAD solutions can optimize your Testing Process, please contact your Sales Representative by emailing:

Sales-eu@arcadsoftware.com

Further reading:

Definition of BVT and Smoke Testing:

https://www.softwaretestinghelp.com/bvt-build-verification-testing-process/

http://softwaretestingfundamentals.com/smoke-testing/

Shift Left Testing definition:

https://en.wikipedia.org/wiki/Shift_left_testing

Nick Blamey

ARCAD’s Director of Northern European operations

Nick Blamey joined ARCAD from IBM where he was responsible for DevOps and Rational solutions in various roles across Europe. Previously Nick worked for other software development tools organisations including HP / Micro Focus, Fortify Software (acquired by HP), Empirix (acquired by Oracle), Radview and Segue (now Micro Focus). Nick is a thought leader in the areas of Static Code Analysis, Testing Automation, DevOps and Shift-Left strategies.


One Repository To Rule The Source – And Object – Code

The concept of a single repository for source is not necessarily a new one. When I interviewed with ARCAD back in 2011, I did so at the Rational conference called Innovate, in Orlando. The research and development team and our chief technology officer were already in dialogue with IBM to resell ARCAD technology alongside its Rational development suite, adding power to Rational Team Concert so that development organizations could effectively have a similar repository for IBM i and open source applications.

At the time, RTC supported the open source world very well, just like Git…

Read this Article

Should we be using Containers in production?

By Joseph-André Guaragna | January 21st 2019


Container adoption is growing exceptionally fast. 451 Research predicts a 40% annual growth rate in the application container market, reaching $2.7bn by 2020. According to Gartner Research, “By the year 2020, more than 50% of companies will use container technology, up from less than 20% in 2017”.

Although the uptake of containers has been swift in development environments, container use is significantly rarer in production. Research led by Diamanti shows that, of those that have already adopted container technology, 47% plan to deploy containers in production and 12% have already done so.

In this article we examine the rise of containerization, the potential gains from the technology, and ways to extend container use in today’s typical heterogeneous environments.

1. What is containerization and why is it becoming so popular?

The initial concept of containerization emerged in 1979, during the development of Unix V7, with the introduction of the “chroot system” (Aqua). Its main goal is portability: “build once, deploy everywhere.”
Containerization makes it possible to isolate an application in a sort of prison (borrowing the same analogy as the “Jails” of BSD). In essence, containers provide a system of resources visible only to the process, which does not require the installation of a new OS since it uses the kernel of the host system.
The containers therefore run natively on their OS, which they share with each other. Applications or services are consequently much lighter (a few MB on average compared with several GB for VMs), enabling much faster execution.

The creation of Docker in 2013 popularized the concept of containerization by making it much easier to use and offering a complete container management ecosystem.
– Docker integrates perfectly with the concept of DevOps, especially in the area of versioning: development and production are carried out in the same container. Put simply, if the application works on the Dev side, it will also work on the Ops side. Unlike a VM or a traditional application, there will be no side effects due to installation or a specific configuration needed in production.
– Resource cost is another key factor behind the popularity of containerization. As a basis for comparison, a machine capable of running 50 VMs will be able to host 1,000 containers.
– The speed of starting a container is also a major benefit, as it does not contain the OS: only a few seconds, as opposed to over a minute for a VM.
– Lastly, orchestrators such as Mesos DC / OS or Kubernetes have emerged, to automate the deployment and management of containerized applications. These solutions bring high levels of scalability, responsiveness and elasticity, critical when handling sudden peaks of activity to meet business needs such as Black Friday for example.

2. In which technology environments is containerization the best fit?

Containerization can be applied to all types of technology, but is an ideal fit for managing web applications, especially in Linux environments.
It is also used in front-end development and middleware, but so far very little for back-end technologies.
The principal reason is that databases are optimized to interact directly with the hardware, so containerization would bring no gain in performance.
Containerization is also valuable in “Canary Deployments“, a strategy for deploying versions to a subset of users or servers. The goal is first to deploy the change on a small number of servers, test the change and monitor the possible impacts, before extending the change to the remaining servers.
Kubernetes, a container orchestration system offered by Google to the open source community, implements deployments in a standard or advanced way via tools like Istio, an open technology for connecting, managing and securing microservices at scale.
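To make the canary idea concrete, the sketch below splits traffic by a fixed percentage between a stable and a canary version. In practice an orchestrator or service mesh such as Istio applies this weighting at the routing layer; the weight and version labels here are illustrative only.

"""Minimal sketch of percentage-based canary traffic splitting.
The weight and version names are illustrative placeholders."""
import random

CANARY_WEIGHT = 0.05  # 5% of requests go to the new version

def pick_version():
    """Route a single request to either the canary or the stable version."""
    return "v2-canary" if random.random() < CANARY_WEIGHT else "v1-stable"

if __name__ == "__main__":
    sample = [pick_version() for _ in range(10_000)]
    print("canary share:", sample.count("v2-canary") / len(sample))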

3. Who are the main containerization players?

Even though the concept of containerization was inspired by solutions like chroot, FreeBSD Jails or LXC (Linux Containers), other players dominate the market today. Among them are Docker, mentioned above – the undisputed leader in containerization – but also Rocket (rkt) from CoreOS (recently acquired by Red Hat and renowned for its security), Canonical LXD and Virtuozzo OpenVZ, the oldest container platform.
Supporting these container solutions we also find the key orchestration players such as Kubernetes and Swarm (from Docker), and the Mesos DC/OS platform.

4. What are the obstacles to adopting Docker in a production environment?

In small organizations, a reticence to use Docker in production can be due to a skills gap, both in the use of Docker itself and in the orchestrators.
When using Docker, applications are “stateless” because of the microservices architecture used, while most applications, even n-tier, are “stateful”. For this reason, an adaptation of the software architecture is often needed before using Docker in production.
And there is a further impact: the application is no longer controlled in the same way. The use of Docker generates a cloud of applications, in which the links between services must be taken into account.
This is a new paradigm for both systems administrators and developers: it requires new tools to understand “live” how these communications are realized, in order to resolve bugs. Solving these problems is therefore much more complex.
The software architecture is impacted, as is the hardware architecture, which must be able to handle very large log volumes to achieve this.

5. How to solve security problems when using containerization?

The main role of solutions like Docker is in the running and managing of containers, but they are rarely deployed as-is in production. Usually, they are used in conjunction with container orchestrators, designed to manage multiple machines.
Orchestrators also offer specific services that will directly address containers created under Docker.
These include security features such as PKI for certificate management, or CNI for network management.
Orchestrators also provide high availability, management of sensitive data, and guarantee container isolation.
These tools are rich and relatively complex, requiring specific experience, which explains why very few of them are deployed on-premise.

6. Deploy your containers with DROPS!

As a release orchestration solution, DROPS can, like other solutions, interact with a Kubernetes cluster in order to send the various images produced in development to a production registry, be it On Premise or Cloud, such as AWS, Azure or IBM Cloud.

But the main advantage of DROPS lies in its ability to orchestrate all types of deployment in a heterogeneous environment.

Non-intrusive, DROPS works with all types of Orchestrators, utilizing the underlying features of the orchestrator itself.  It relies on communication tokens and therefore does not require the installation of a plug-in.

In this way, DROPS is able to secure the deployment, update and rollback of Legacy, On Premise, Cloud or container applications in the same environment.

With DROPS, the process of deployment is comprehensive and consistent across all applications, regardless of the underlying platform, leveraging the Orchestrator infrastructure and tools.


Drops for DevOps

White paper

With DevOps, improve the relationship between Development and IT Operations and create better collaboration between them!

Download the White paper

Drops

Datasheet

DROPS offers a unique solution for the management and automation of application and systems software deployment across multiple platforms – including IBM i, Windows, UNIX, Linux, and System z.

Download the Datasheet


Joseph-André Guaragna

Pre-Sales Consultant, ARCAD Software

Joseph-André began his career as a web developer 12 years ago in the Open Source world. He then specialized in distributed infrastructure management as a System Engineer. Over the course of these roles he has acquired solid knowledge of automation, continuous integration and DevOps technologies. He is currently working on the technical pre-sales phase of ARCAD Software’s DROPS solution.


Automated Testing for IBM i DevOps: How to accelerate and protect your IBM i DevOps pipeline

Implementing a DevOps approach means deploying source code changes to production faster and more frequently.  The “flip side” of frequent delivery is the increased risk of a defect reaching production and an outage that could impact the credibility of your organization.

This is why successful DevOps implementations integrate test automation as part of the CI/CD process.

Watch this Webinar to learn how to safeguard your IBM i application availability with automated regression testing:

  • Record test scenarios using an easy-to-use emulator
  • Trigger the replay of test scenarios on each commit
  • Drive smart dependency-based test campaigns
  • Report on defects at their root cause

Watch the Replay

The presenters

Ray Bernardi

Senior Consultant,  ARCAD Software

Ray Bernardi is a 30-year veteran of System/38, AS/400, iSeries and IBM i development and is currently a Pre/Post-Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner. Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas including enterprise IBM i modernization and DevOps. In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several IBM i publications on the subjects of application analysis and modernization, SQL, and business intelligence.

Floyd Del Muro

Technology and DevOps Advocate, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is currently Technology & DevOps Advocate for ARCAD Software, managing the IBM relationship and partnership with the IBM Cloud, IBM Systems and product managers for Rational Team Concert (RTC), Rational Developer for i (RDi) and UrbanCode (UC).  In his role at ARCAD Software, Floyd has been directly involved in the management of modernization projects on IBM i, from planning stages through to delivery, spanning modernization of the database, business logic and UI.  Drawing on his experience in project rollout and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, continuous delivery and test automation.


GIT a Jump in Your Step

by Floyd Del Muro | January 25th 2019

The rise of Git

Reflections on the rise of Git and the DevOps paradigm on so-called legacy platforms IBM i and z/OS – by Floyd Del Muro, Technology & DevOps Advocate

In my experience, which started back in 1988 on the System/38, nothing has dominated the development landscape since Y2K like Git. As an advocate of DevOps, virtually every day in my professional life I have discussions about Git. Is that happening to you as well? Last night, I met a few young IT professionals from Nepal during a social hour for cocktails and dinner. The first young man, about 30-something years young, is a product manager for a startup healthcare company. Their new Python-based solution is meant to provide digital transformation for hospitals countrywide, moving them away from the solutions and manual efforts of today. Always probing, I asked, “So, what do you do for source control and versioning?” His response: “Git”, through a GitHub repository. A second young man at the table said he is a Java developer for a major office supply retailing company. I asked him the same question and his response was nearly identical.

If you know me, I know what you are thinking – “Floyd, you frequent many technology events annually in your travels, so it is not a big stretch to have a geek-fest conversation with a few young open source developers”. I could not agree more! It is fairly common for me to talk with a mixture of young and senior developers at professional conferences. The reality is this did not happen at an organized technology event. All of this GitHub and DevOps dialogue took place at a restaurant in Kathmandu, Nepal. The event was a meet-and-greet for family and friends of Prawal, the groom-to-be at an upcoming Hindu wedding. This collision of culture, code and technology started when my brother hired Prawal eight years ago for an electrical engineering position in the Boston area. That led to an invitation for my brother’s family – extended family, counting me – to attend the wedding. Needless to say, I was more than pleased to accept the invitation and the adventure.

What is my point, you ask? Git continues to gain traction in the development world, even in developing countries like Nepal. Our strategic business partner, GitHub, boasts 32 million developers using their social development platform modeled around a Git repository. I talk daily to programming managers and developers about how they need to deliver new features and functions to the business faster than ever imagined before: innovative ways to engage customers or prospects through new touch points, web interactions and mobile applications. These new user interfaces often require integration with the critical business data that, in our world, resides on the IBM i. It is this clash in development workflow that is causing stress and fracturing of any current workflow process, tooling and, lastly, people. There is a tremendous need for a platform that is scalable, secure, reliable, yet quick, and that can manage software modification risk. This concept of DevOps – standardizing the workflow of software change on a common stack of tooling for business-critical source code assets – could cause even more havoc in an already hectic world if attempted without a depth of understanding of the challenges, both real and perceived. In addition, you still need to provide the needed financial value in terms of implementation, training and return on investment. Delivering on all these promises is what makes DevOps very appealing to companies of all sizes.

Why Git? From the start, it was designed to take into account this new collaborative paradigm of development. It includes the needed productivity and independence of code modifications for today’s programmer, yet still provides the needed structure, security and compliance mandated by the business environment today. It provides the ability to function over disparate computer languages, developer demands, and distributed teams without sacrifice in providing the integration, speed and flexibility to drive the business and market changing innovation. It is a new way to work independently yet collectively and manage the ever-present failure and risk that exists in the IT world. Git coupled with a platform like GitHub offers unbridled benefits. Autonomous development, collaboration, dynamic code merging, scalability and controlled yet seamless visibility are just a few of the reasons for its massive adoption rate and cultural movement.

Git may not be in the stars for ALL legacy companies running on IBM i or Z. However, if you are challenged with your current load of hybrid development, then Git may be your answer. There are challenges you will need to understand to reach maximum velocity and success. The reality is that most of them are cultural and not based on technology or tooling. You can manage RPG or COBOL code almost identically to any other programming language-based efforts today. It will eliminate the existing challenges you experience and perceive today, providing the needed and unknown scalability for tomorrow. The best result will be that business leaders will have a greater understanding and acceptance through the visibility and speed in which new ideas can be introduced without the disruption or complete dismissal of the existing business model.

So GIT excited and take the steps needed to jump start your business now and for the future.


DevOps for IBM i

White Paper

Implementing a DevOps strategy on IBM i?  Read our White Paper!

Download the White Paper
Enterprise DevOps

White Paper

This paper attempts to debunk competing DevOps concepts, terminologies and myths in order to help make the path forward clearer and more practical.

Download the White Paper

Floyd Del Muro

Technology and DevOps Advocate

With 26 years of experience on the IBM midrange platform, Floyd is currently Technology & DevOps Advocate for ARCAD Software, managing the IBM relationship and partnership with the IBM Cloud, IBM Systems and product managers for Rational Team Concert (RTC), Rational Developer for i (RDi) and UrbanCode (UC). In his role at ARCAD Software, Floyd has been directly involved in the management of modernization projects on IBM i, from planning stages through to delivery, spanning modernization of the database, business logic and UI. Drawing on his experience in project rollout and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, continuous delivery and test automation.


An introduction to Data Masking – Infographic

by Olivier Bourgeois | December 26, 2018

Data grows continuously, and data breaches concern all enterprises, so regulations focusing on data protection and privacy are emerging. Discover how Data Masking can solve these challenges.
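As a simple illustration of one masking technique (not DOT Anonymizer’s actual algorithm), the sketch below derives a deterministic pseudonym from each value, so the same original value always masks to the same alias and referential consistency across tables is preserved. The secret key and sample names are placeholders.

"""Illustrative data-masking sketch: deterministic pseudonymization.
The secret key and sample data are placeholders, not a production setup."""
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"

def mask(value: str) -> str:
    """Derive a repeatable alias from a sensitive value."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"CUST_{digest[:8].upper()}"

if __name__ == "__main__":
    for name in ["Alice Martin", "Bob Dupont", "Alice Martin"]:
        print(name, "->", mask(name))
    # identical inputs mask to identical aliases, so joins across tables still work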

[Infographic: An introduction to Data Masking]
DOT Anonymizer

Datasheet

The anonymization of personal data is an ever-sensitive subject. DOT Anonymizer helps you protect the confidentiality of your test data. This document will show you how.

Download the Datasheet


3-part webinar series: Git on IBM i

“Git Started: Using Raw Git on IBM i”

Thanks to open source technologies, IBM i systems can do more than ever! What does it mean to you? In this webinar, we’ll explore git and the surrounding open source ecosystem, and what it can do for your business.

“Git Going: Social Coding on GitHub Enterprise”

In today’s competitive world, software developers and organizations are being increasingly challenged to deliver innovative solutions, or be left behind by their competitors. Developers have redefined software development as a collaborative practice, in which developers from all over can contribute code. GitHub helps developers collaborate on projects, as well as providing tooling and automation that reduce the amount of context switching when delivering innovation. In this webinar, we’ll discuss the value proposition of GitHub, the changing notion of the software developer, and considerations around building a modern software practice, to not only attract top talent, but also lay the groundwork for innovation to flourish!

“Git Ahead: IBM i DevOps cycle with Git”

Git is an integral piece to a complete DevOps cycle, a cycle you can easily tie together with tools aware of the intricacies of the IBM i platform. You may like the collaboration git provides on its own, and also want the social features GitHub brings, but how can all that be integrated on a platform which, to be frank, wasn’t built with agile in mind? It is possible to have the best of both worlds – an agile workflow which greatly increases productivity, and the stability that the IBM i platform brings. This session will show you things you may not have thought possible, such as taking IBM i native code from a GitHub repository, building it directly on an IBM i, and deploying the objects to an automated test environment. But it doesn’t stop there! We will examine a complete CI/CD workflow, from a feature branch all the way to a production deployment. All of this and more is possible with a tightly integrated toolset.

Watch the Replay

The presenters

Jesse Gorzinski

Senior Architect at IBM

Jesse works for the IBM i development lab in Rochester, MN. He is the Business Architect of open source technologies. Jesse, who was doing RPG programming at the age of 18, is an expert on application development on IBM i, as well as system access and modernization. His hobbies include playing with his dog, replacing complex applications with 5-line Python programs, and advocating for the use of new technologies on IBM i! Prior to his 2006 employment at IBM, Jesse worked with the AS/400 as an I/T administrator for an IBM customer in the finance/mortgage industry, where he specialized in data backup/recovery, process optimization, and information integrity. He has a Bachelor’s degree in Computer Science as well as a Master of Business Administration degree.

Christian Weber

Senior Solutions Engineer at GitHub

Christian is a remote-first Solutions Engineer at GitHub. After spending the first chapters of his career in Financial Technology, Christian now focuses on assisting both cultural and digital transformation in large, medium and small software organizations. In his spare time, Christian enjoys playing guitar, listening to loud music, and zoom-zooming in his Miata!

Ray Bernardi

Senior Solutions Consultant

Ray Bernardi is a 30-year veteran of System/38, AS/400, iSeries and IBM i development and is currently a Pre/Post-Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner. Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas including enterprise IBM i modernization and DevOps. In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several IBM i publications on the subjects of application analysis and modernization, SQL, and business intelligence.

 

ARCAD Software Inc.
70 Main Street, Suite 203
Peterborough NH 03458
USA
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
ARCAD Software Asia
c/o Pramex International Limited
1 Austin Road West International Commerce Centre
7107B 71/F Tsim Sha Tsui HONG KONG, Yau Ma Tei
Hong Kong
sales-asia@arcadsoftware.com

 

5 keys to IBM i Modernization Success

by Alexandre Codinach | September 27th 2018

Successful IBM i application modernization projects are those that find the right balance between IT and business objectives.

These objectives can take the form of:

  • Improved system maintainability, flexibility, and scalability
  • Adoption of new tools and methods of development
  • Reduced risks and operational costs
  • Reduced time to market
  • Improved customer satisfaction and productivity
  • Easier hiring of skilled resources

Whatever the reason for a modernization project for a legacy system like IBM i, it is important to identify some key points for the success of the project:

1. Obtain backing from general management

Whatever its scope, a modernization project is a business project that goes beyond IT issues alone.  The stakes relate to the performance of the company, its development and sometimes its survival, although the subject matter may be somewhat obscure to the layman.

Tip:  “Popularize” the modernization project by conveying the business value associated with the technical gain.  Translate the technical argument into a business argument, and weigh any short-term impacts against the Return on Investment at the end of the project.  Secure management backing right at the start through an understanding of the business value gained from modernization and the risk of inaction.

2. Define an overall modernization roadmap

In such a project, not everything can, or must, be modernized.

We are dealing not with modernization but with modernizations (in the plural). The approach must not be Manichean: techniques like modernization of the existing system, reengineering and/or software packages are not necessarily incompatible.

There is no “silver bullet” that takes a system from legacy to modern in one leap. Complete renewal within 3 years is a fantasy. Modernization is a continuous, staged process that must interleave quick wins with longer-term goals.

Tip:  Plan regular communication points so that everyone in the organization visualizes and understands the issues. Including resources from the business side and defining clear business indicators will help this process.

3. Involve staff early, to include all impacted parties

Just like any IT project, even when it is outsourced, modernization consumes staff resources.

Over and above the technical side of the project, it is important to take into account an overall change management process within the organization, from IT right through to the business users, whose interaction with the application may change significantly enough to impact their daily work.

Tip:  Involve impacted staff right from the analysis phase of the project, to participate in the decision making process and be the first lever of communication with the teams.

4. Secure through automation

As work is underway, business must go on: modernizing must NOT mean putting projects on hold and ceasing to deliver new features needed by the business lines.

Automating your application lifecycle reduces risk and increases the productivity of IT staff by allowing them to focus on value-added work. In the end, this makes it easier to allocate resources.

Continuous integration and deployment (CI/CD) will help you reduce development times and secure the reliability of applications in production.

5. Test for non-regression

Often only internal teams, and sometimes only the business users, are able to provide useful scenarios for regression testing. Prepare these scenarios carefully before the project begins.

You must be able to verify that the modernization process, however wide-reaching, has not resulted in unexpected side effects that could degrade the operation of your application.

Run these tests again during the modernization project and check for errors.

Finally, if you do use external teams for all or part of your project, ensure that a non-regression guarantee is included.

It is vital to ensure that the system will continue to meet requirements.

Tip:  Benefit from the investment in testing needed for this project to bring long term improvements in your company’s testing process.

Conclusions

  • Communicate on, and build support for, your project
  • When defining the scope of your project, run a functional audit in addition to the technical audit
  • Anticipate the staffing needs to complete your project
  • Secure the project through automation, to ensure application availability for your end users
  • Check that the system continues to meet requirements using automated regression testing
Modernization as a Service White Paper

Modernization as a Service

White Paper

This paper examines the problems associated with maintaining often mission critical IBM i (aka iSeries, AS/400) legacy applications on IBM Power systems.

Download the White Paper
Enterprise Modernization for IBM i

Enterprise Modernization for IBM i

Brochure

“Through enterprise modernization, IBM i organizations can leverage their competitive advantage and R&D investment on a uniquely reliable platform strategically positioned for mobile and cloud technologies into the future.”

Read the Brochure

Alexandre Codinach

Alexandre Codinach

VP Sales and Operations Americas

Alexandre Codinach has 30 years of IBM i experience, both technical and managerial, with specialized expertise in the field of IBM i modernization.  With a 360 degree view of IBM i, Alexandre has excelled in many roles, including application architecture, project management, pre-sales and consulting.  As ARCAD COO, his in-depth knowledge of IBM i technology and ability to coordinate large, complex IBM i projects on an international scale have made him a trusted advisor in the rollout of ARCAD’s “Modernization as a Service” projects worldwide.

2018-12-06T09:47:03+00:00 Blog|

Test Automation and Source Code Analysis for IBM i: why bother enforcing a new quality gate?

By Nick Blamey | November 27th 2018

Quality Gate

What’s in it for the developers and why is it needed for DevOps – a thought inspiring blog by ARCAD’s Director of Northern European operations, Nick Blamey

The business problem:  Continuous Quality in applications depends on implementing policies that enforce immediate validation.  But CIOs responsible for diverse application assets often lack both the coding guidelines from which to start measuring code quality and the resources to allocate to this activity.

Continuous Quality (CQ) and DevOps

Software defects drastically increase the cost of application development. Finding and fixing errors in production is often 100 times more expensive than finding and fixing them during the design and coding phases.  It is vital that teams incorporate quality into all phases of software development and automate quality verification as far as possible, to locate defects early in the process and avoid repeat effort.  This is what is meant by “Continuous Quality” or CQ, which forms an essential safeguard – or quality gate – in the rapid delivery cycles of DevOps and CI/CD workflows today.

Which techniques are available for Continuous Quality?

Static code analysis is the simplest and most effective method to prevent defects and harden code while accelerating application delivery.

Automating the code analysis as early as the build or Continuous Integration phase means your team can find and fix systemic defects when the cost of remediation is at its lowest.  After the initial investment in configuring rules and metrics the gains in efficiency become exponential over the development lifecycle.
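As a rough illustration of how such a gate can be wired into a build step (the JSON report format, severity names and thresholds below are assumptions made for the example, not ARCAD CodeChecker’s actual interface), a short script can read the analyzer’s findings and fail the build whenever an agreed limit is exceeded:

```python
#!/usr/bin/env python3
"""Minimal CI quality-gate sketch: fail the build when static-analysis
violations exceed a threshold. The report format, severity names and
thresholds are illustrative assumptions, not a real product interface."""

import json
import sys

MAX_MAJOR_VIOLATIONS = 0      # agreed quality gate: no new major findings
MAX_MINOR_VIOLATIONS = 25     # tolerate a bounded number of minor findings


def load_report(path: str) -> dict:
    """Read a JSON report produced by a (hypothetical) static-analysis step."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def gate(report: dict) -> int:
    """Return 0 if the gate passes, 1 otherwise (CI treats non-zero as failure)."""
    major = sum(1 for v in report.get("violations", []) if v.get("severity") == "major")
    minor = sum(1 for v in report.get("violations", []) if v.get("severity") == "minor")
    print(f"Static analysis: {major} major, {minor} minor violation(s)")
    if major > MAX_MAJOR_VIOLATIONS or minor > MAX_MINOR_VIOLATIONS:
        print("Quality gate FAILED - build stopped before promotion")
        return 1
    print("Quality gate passed")
    return 0


if __name__ == "__main__":
    sys.exit(gate(load_report(sys.argv[1] if len(sys.argv) > 1 else "analysis.json")))
```

Run as the last step of a CI job, a non-zero exit code stops the pipeline before the change is promoted any further.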

To achieve continuous quality, organizations can employ a number of strategies.

  • Static analysis of source code to identify complexity hotspots, deviations from standards, security loopholes, etc.
  • Peer reviews to check code written by one’s equals (peers) to ensure it meets specific criteria.
  • Unit testing to execute and scrutinize the individual modules, or units, of an application, for proper operation.
  • Regression testing to repeat a set of test scenarios on a new application release to identify any deviations from normal operation.
  • Monitoring of the application in production to ensure it is operating correctly after each update and at all times.

To be effective in a DevOps environment, each of the above techniques must be both automated and continuous, integrated within the CI/CQ/CD workflow.

How important is Source Code Analysis (SCA) for DevOps?

Source code Analysis:  what alternatives?

Faced with the challenge of auditing an existing IBM i code base for quality, CIOs have a limited number of choices:

  • Complex peer review process: even supported by collaborative tooling, the manual effort involved in a peer review can be difficult to manage across large teams with a range of expertise.
  • External audit of source code by experts: lacking in application knowledge, the learning curve for external code auditors is often steep, making this an expensive option with often unquantifiable benefits.
  • Continuous source code analysis using an automated solution designed specifically for an IBM i RPG code base

The cost of defects in application development

A study in software quality from Capers Jones in 2008 came to two very important conclusions:

  • Development “wastage” and defect repairs (from deployed code) absorbed almost two-thirds of the US software workforce – leaving only one third for productive work on new projects.
  • 50% of software development project budgets are spent fixing poor quality code; fewer than 6% of organizations have clearly defined software management processes in place; and software projects of 100,000 function points in size have a failure rate of 65%.

More recent articles on this topic suggest that for many organisations the statistics remain very much unchanged today.

Since DevOps has now taken over as the key driver in most development shops, there is massive potential for optimisation: eliminating the challenges described above makes developers more efficient in their work, spending more time coding and less time fixing defects.

The limitations of functional testing

Most organizations perform functional testing of their applications through the UI, often required for compliance reasons.  However, as with any black-box testing, each and every defect needs to be diagnosed from the UI, so this approach brings little information to help developers actually fix problems.  The result is typically a constant stream of defects classified as:

  • cannot reproduce the issue
  • test environment not set up correctly
  • require more information

With poor information reaching developers, the challenges are pushed downstream.  Projects face delays due to lengthy code-understanding effort and a sub-optimal debugging approach. This severely limits the ability of any IBM i development team to maintain the “speed of delivery” required to meet their DevOps targets.

Static Code Analysis: Who does it and why

There are three main approaches to static code analysis in the multi-platform world: static analysis for security, static analysis for code complexity, and static analysis for code quality.

Many products exist to perform this task and the market is large and expanding, with a few dominant players. The solutions are often extremely expensive and tend to be less relevant on IBM i, which is less susceptible to security issues than other platforms.

IBM AppScan Source is perhaps the best-known market leader for code security, but MicroFocus also offers the Fortify Security Suite, with a number of additional tools available from other vendors, e.g. CheckMarx, Klocwork and CA Veracode.

For code complexity metrics, the key players include CAST and McCabe, but neither offers support for RPG on IBM i.

Why do IBM i Developers need Source Code Analysis?

Given the multiple variants of RPG and the sheer longevity of applications, developers on IBM i face a unique challenge with legacy code bases containing millions of lines of code that have been maintained for sometimes thirty years by successive developers.  It is laborious to understand program logic and assess the quality of code – resources are diverted to address the “technical debt” of the code base.  The challenge is greater still given the ever-growing shortage of RPG skills in the market.  The new Free Form RPG syntax has changed the game, offering a means of onboarding a new generation of developers – making the conversion of RPGLE applications to Free Form the “burning platform” of our day.

Source Code Analysis as a key part of any “legacy code base audit”

Source Code Analysis has the potential to be delivered as part of a wider code audit process.  Companies like ARCAD have built solutions that generate a complete metadata overview of the entire code base, enabling a deeper level of analysis and integrity checking.  Here source code analysis is delivered as part of the code audit and rules and metrics are used to enforce local standards.

ARCAD CodeChecker can create a Code Quality Baseline from which the RPG Code Base can be continually improved through accurate and regular measurement of code quality. This allows CIOs and Development leads to show application owners that they are consistently delivering against ISO 27001 continual improvement goals of the wider organisation.

Widen your net to catch Code Quality issues for RPG

As described above, RPG is a special case in the development world.  Among the standard source code analysis tools, a few (such as SonarQube) are able to perform a simple RPG code review and static quality analysis, but they are severely limited in their coverage (for example, lacking support for the many RPG variants) and in the number of rules they can enforce (limited to around 30, mainly code documentation guidelines).

The potential business risk of these limited tools is that:

  • They are not really usable for code quality guideline enforcement for RPG specifically
  • They tend to create a number of false positives, which limits their effectiveness and can in practice negate any value introduced in the peer review process by forcing developers to debug issues caused by the tool itself.

Modern DevOps organizations are now looking for an “industrial strength” solution to this challenge, to ensure that the implementation of an open source Source Code Analysis tool doesn’t itself become a bottleneck in the DevOps workflow.

The design goals of ARCAD CodeChecker have therefore been to fit the needs of large, DevOps-oriented IBM i RPG development teams, emphasizing:

  • Rapid scan of an entire code base
  • Auto-tuning of the quality rules for enforcement on a code-base-by-code-base, stage-by-stage basis
  • Real value to the developers performing the coding work, through rapid feedback on the standard of their work after each and every edit (see section below)
  • Seamless integration into the wider ARCAD DevOps toolchain, offering rapid and complete cross-referencing and auditing, source code management, dependency building, automated testing (with deep-dive diagnostics of errors), and deployment and release automation for IBM i LPAR environment management.

RAPID VALUE from ARCAD CodeChecker for your CodeBase

Productivity gains through Source Code Analysis Quality Gate enforcement

Typically, if a developer knows within a few minutes of writing code that a guideline has been breached, via a desktop code-checking product like ARCAD CodeChecker, they can fix the issue immediately with minimal impact on quality. If code is “peer reviewed” instead, a developer could wait days or even weeks for feedback, by which time they have moved on to other tasks. The best analogy is a grammar checker in a word processor: if you know immediately as you write that you have made a grammar error, you can fix it while the sentence is still in your mind. If the grammar check runs two weeks after you wrote the sentence, you will spend most of the correction time simply trying to recall the context of what you wrote.

Driving enforcement of Code Quality by offering real and immediate value to Developers

Many Source Code Analysis tools have a bad reputation. CIOs and development management must constantly weigh one risk against another: slowing down the development process (and facing pressure from business owners to maintain the speed of delivery) versus introducing technical debt, which can become a large business risk in the future if code quality guidelines are not enforced.

CodeChecker from ARCAD has been designed to add immediate value right in the developer’s workspace/desktop through its integration with RDi and SEU. ARCAD designed CodeChecker to answer a regular comment from RPG developers: “If you are going to mark my homework, at least tell me how you are going to judge my success or failure.”

In addition to optimising the peer review process through automatic static code quality analysis, ARCAD CodeChecker offers value to developers and your in-flight projects:

  • Creation and enforcement of source code quality guidelines, delivered automatically, eliminates the need for peer review, allowing developers to focus on coding.
  • Putting a source code analysis process in place to eliminate technical debt means that the IBM i RPG team can keep pace with other teams in the organisation in a DevOps world.

Combining Source Code Analysis with Testing Automation, Database integrity checking and helping Developers to debug complex issues more rapidly.

Modern Development Teams face this problem

DevOps Bottleneck effect from manual Source Code Analysis, Testing, and Debugging, re-test cycle

DevOps Bottleneck effect from manual Source Code Analysis, Testing, and Debugging, re-test cycle.

To cope with the acceleration of DevOps cycles from a few releases per year to more regular (monthly or even weekly) releases, organizations are driven to perform more regular testing, normally delivered through automation.  They are also impelled to eliminate much of the effort that goes into localising defects to their root cause, to ensure that developers can deliver higher-quality code without impacting the timeframes required by application owners.

Shift left as Key Driver for DevOps:

The graph below shows a typical IBM i / RPG defect curve, i.e. the number of defects that occur over time from the start of the project to the actual release date.

Cost of defects across the development lifecycle

Cost of defects across the development lifecycle

Though typically 85% of defects are introduced in the early coding phases of the DevOps cycle, the cost to repair defects grows exponentially through the later phases of test and delivery, reaching inestimably high costs when a defect is found in production, with potentially a significant impact on business bottom line and reputation.

It is clear to see that by “shifting left” the detection of defects, their cost and impact is minimized.

ARCAD Software, through their work with many RPG development teams, have seen that developers perform a number of tasks to advance the detection of errors. These include:

  • Hand-coding unit tests to exercise individual program functionality and make sure that they haven’t created defects as they develop.
  • Testing that individual batch processes still work after any changes are made to specific programs (see the sketch below).
  • Re-setting and working with complex test data, including anonymisation requirements.
  • Cross-referencing defects across a multitude of components/programs to understand the impact of each code change on other RPG programs, and also on non-IBM i cross-references.
  • Scripting the deployment of new code, once compiled, onto the different LPARs (dev, QA, prod etc.) and then performing a manual check that, once deployed, each of the LPARs is fully functioning. This process is typically referred to as “test environment assurance”.
  • Preparing the LPAR for full end-to-end test execution, including load testing and end-to-end functional testing.

Yet in ARCAD’s experience, each of these processes, when performed manually, adds cost, effort and risk of bottlenecks to a DevOps deployment process.
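To illustrate just the batch-testing task from the list above (the file names, paths and flat-file comparison are assumptions made for the example), an automated check can simply diff the output of a batch run against a previously captured, known-good baseline:

```python
"""Sketch of a batch-output regression check: after a code change, re-run a
batch process and diff its output against a known-good baseline. The paths
and the idea of comparing flat output files are illustrative assumptions."""

import difflib
from pathlib import Path

BASELINE = Path("baseline/invoice_run.txt")   # output captured before the change
CANDIDATE = Path("latest/invoice_run.txt")    # output produced after the change


def compare(baseline: Path, candidate: Path) -> bool:
    """Return True when the two batch outputs are identical."""
    old = baseline.read_text(encoding="utf-8").splitlines()
    new = candidate.read_text(encoding="utf-8").splitlines()
    diff = list(difflib.unified_diff(old, new, lineterm="", n=1))
    if diff:
        print("Batch regression detected:")
        print("\n".join(diff[:40]))           # show only the first differences
        return False
    print("Batch output unchanged - no regression detected")
    return True


if __name__ == "__main__":
    raise SystemExit(0 if compare(BASELINE, CANDIDATE) else 1)
```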

To cope with these challenges, ARCAD offers a number of tools in addition to CodeChecker (source code analysis) to eliminate bottlenecks in the DevOps process and provide a frictionless flow from functional specification through to coding, unit testing, compilation, build, end-to-end functional test and production deployment:

  • ARCAD Verifier (BATCH and UI Testing)
  • ARCAD DOT Anonymizer
  • ARCAD Observer for cross-referencing
  • ARCAD Builder and DROPS for deployment automation and Release Management

Each of these solutions can add additional value to your Development Process, shifting left to reduce overall cost in the development cycle:

Contribution of ARCAD solutions to a “shift left” of development costs

Contribution of ARCAD solutions to a “shift left” of development costs

SpareBank Success Story

SpareBank reduced costs of environment management & compliance by 70%

Case Study

For example, SpareBank1, one of ARCAD’s leading customers, achieved a 70% reduction in Test Environment Assurance effort through the use of the ARCAD suite.

Read the Story

ARCAD view and positioning

As a company, ARCAD began its evolution fixing the source code analysis problem of Year 2000 date format changes. Since then, ARCAD has provided solutions to the most burning and current challenges our 350+ customers face with their RPG code bases: cross-referencing, auditing, source code management, building, testing and deploying.

ARCAD for DevOps: suite of solutions integrated over a repository core

ARCAD for DevOps:  suite of solutions integrated over a repository core

ARCAD Steps

Suggested next step:

An Audit process using ARCAD expertise and tooling is an excellent starting point to your journey to a quality DevOps process.

To find out more about how ARCAD have designed their solutions to fix the next problem in Source Code Analysis, contact ARCAD and see how CodeChecker and other ARCAD DevOps tools for IBM i can help with your code review, audit, testing and DevOps processes.

Nick Blamey

Nick Blamey

ARCAD’s Director of Northern European operations

Nick Blamey joined ARCAD from IBM where he was responsible for DevOps and Rational solutions in various roles across Europe. Previously Nick worked for other software development tools organisations including: HP / MicroFocus, Fortify Software (acquired by HP), Empirix (acquired by Oracle), Radview and Segue (now Microfocus). Nick is a thought leader in the areas of Static Code Analysis, Testing Automation, DevOps and Shift-Left strategies.

2018-11-29T12:00:30+00:00 Blog|

ARCAD Software launches “Pay-per-Use” system for DROPS, their flagship Application Release Orchestration (ARO) solution

ARCAD Software launches “Pay-per-Use” system for DROPS, their flagship Application Release Orchestration (ARO) solution

Annecy, France and Peterborough, NH, USA – 12 November 2018 – ARCAD Software, market leader in Enterprise DevOps and Modernization solutions, today announced the launch of a new Pay-per-Use pricing system for DROPS, their flagship Application Release Orchestration solution.

(more…)

2018-11-14T17:29:29+00:00 Press Articles|

Anonymize your test data to prevent a data breach

Anonymize your test data to prevent a data breach

In our previous webinar, we covered how Test Automation is an integral component of the DevOps and agile methodologies. Yet for testing to be effective, you need realistic test data available. A central issue is that this data often comes from production.

This puts development shops particularly at risk of a data breach.

How to eliminate risk and maintain test quality?  Integrate data masking into the heart of your DevOps cycle.

Our Webinar will demonstrate how easy it is to implement high performance data anonymization across any DBMS.

Watch Now!

Watch the replay

(more…)

Getting Progressive About Regression Testing

Getting Progressive About Regression Testing

If you want to employ modern software development and testing techniques, you have to move on from simple unit testing by developers and implement regression testing in your quality assurance (QA) organization. This is perhaps the best way to take the risk out of continuous development – something that companies have to embrace if they are to remain competitive.

The difference between regression testing and normal testing is that, in the most common model, the developer has a request to fix a problem or add a feature; they make their changes and do unit testing, coming up with test cases that exercise the problem or the feature before passing it off to QA to essentially run the same tests. Developers make a change and they know what result they are supposed to get back: if you add 2 plus 2, you know you are supposed to get 4. If 2 plus 2 equals 4, then unit testing is successful. Regression testing, by contrast, tests all of the functionality, which is much broader; unit testing is not looking for broader impacts.
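To make the distinction concrete, here is a minimal sketch in Python (the add() function and the extra scenarios are invented for illustration; they stand in for whatever program is being changed): the unit test checks only the behaviour the developer just touched, while the regression suite re-runs broader scenarios around it.

```python
"""Illustrative sketch of the unit-test vs regression-test distinction.
The add() function and the surrounding scenarios are invented for the
example; they stand in for a real program under change."""

import unittest


def add(a, b):
    """The function the developer just changed."""
    return a + b


class UnitTests(unittest.TestCase):
    """Narrow check of the change itself: 2 plus 2 must give 4."""

    def test_add(self):
        self.assertEqual(add(2, 2), 4)


class RegressionTests(unittest.TestCase):
    """Broader scenarios that must still hold after the change."""

    def test_zero(self):
        self.assertEqual(add(0, 5), 5)

    def test_negative(self):
        self.assertEqual(add(-2, 2), 0)

    def test_large_values(self):
        self.assertEqual(add(10**9, 1), 1000000001)


if __name__ == "__main__":
    unittest.main()
```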

Read the whole Article
2018-10-26T09:38:08+00:00 Press Articles|

The rise of Enterprise DevOps: solving the IT silo challenge

By Olenka Van Schendel | October 23rd 2018

Silos

In 2018 the enterprise IT silo problem still persists.  The disconnect between Digital initiatives and Legacy development continues to drain IT budgets and increases the risk of side-effects in production.  Errors detected at this point have a direct business impact: the average cost of a major incident in a strategic software application in production is $1M per hour, tenfold the average cost of a hardware malfunction per hour (*).  And it’s estimated that 70% of errors in production are simply due to deployment errors, and only 30% due to faulty code.  Yet those responsible for today’s diverse IT cultures lack visibility and control over the software release process.

What solutions are emerging?  Since the last Gartner Symposium, we are seeing Release Management technologies and DevOps converge.  Enterprise DevOps is coming of age.

As a mainstream movement, the DevOps community is assuming the operational responsibility that comes with success. The agility of “Dev” tackles the constraints and corporate policies familiar to “Ops”.

From CI/CD to Enterprise DevOps

IT environments today are comprised of a complex mixture of applications, each one made up of potentially hundreds of microservices, containers, and multiple development technologies – including legacy platforms that have proven so reliable and valuable to the business that even in 2018 they still form the core of many of the world’s largest business applications.

Many CI/CD pipelines have done a fair job in provisioning, environment configuration, and automating the deployment of applications. But they have so far failed to give the business answers to enterprise-level challenges around compliance with new regulations, corporate governance and evolving security needs.
What are called DevOps pipelines today are often custom-scripted and fragile chains of disparate tools. Designed primarily for cloud-native environments, they have successfully automated a repeatable process for getting applications running, tested and delivered.
But most are lacking the technology layer needed to manage legacy platforms like IBM i (aka iSeries, AS/400) and mainframe z/OS, leaving a “weak link” in the delivery process.  This siloed approach to DevOps tooling carries the business risk of production downtime and uncontrolled cost.

Solutions are emerging. Listen to SpareBank1‘s experience for a recent example. The next phase in release management is already with us. Enterprise DevOps offers a single, common software delivery pipeline across all IT development cultures and end-to-end transparency on release status.  This blog explains how we got here.

What has been holding DevOps back? Bimodal IT holds the key.

The last few years have seen the emergence of “Bimodal IT“, an IT management practice recognizing two types – and speeds – of software development, and prescribing separate but coordinated processes for each.
Gartner Research defines Bimodal IT as “the practice of managing two separate but coherent styles of work: one focused on predictability; the other on exploration”.
In practice, this calls for two parallel tracks, one supporting rapid application development for digital innovation projects, alongside another, slower track for ongoing application maintenance on core software assets.

Bimodal IT

According to Gartner, IT work styles fall into two modes. Bimodal Mode 1 is optimized for areas that are more predictable and well-understood. It focuses on exploiting what is known, while renovating the legacy environment into a state that is fit for a digital world. Mode 2 is exploratory, experimenting to solve new problems and optimized for areas of uncertainty. These initiatives often begin with a hypothesis that is tested and adapted during a process involving short iterations, potentially adopting a minimum viable product (MVP) approach. Both modes are essential in an enterprise to create substantial value and drive significant organizational change, and neither is static. Combining a more predictable evolution of products and technologies (Mode 1) with the new and innovative (Mode 2) is the essence of an enterprise bimodal capability. Both play an essential role in the digital transformation.
Legacy systems like IBM i and z/OS often fall into the Mode 1 category. New developments on Windows, Unix and Linux typically fall into Mode 2.

The limits of CI/CD

Seamless software delivery is a primary business goal. The IT industry has made leaps and bounds in this direction with the widespread adoption of automated Continuous Integration/Continuous Delivery (CI/CD). But let us be clear about what CI/CD is and what it is not.
Continuous Integration (CI) is a set of development practices driving teams to implement small changes and check in code to shared repositories frequently. CI starts at the end of the code phase and requires developers to integrate code into the repository several times a day. Each check-in is then verified by an automated build and test, allowing teams to detect and correct problems early.
Continuous Delivery (CD) picks up where CI ends and spans the provision-test-environment, deploy-to-test, acceptance-test and deploy-to-production phases of the SDLC.
Continuous Deployment extends continuous delivery: every change that passes the automated tests is deployed to production automatically. By the law of DevOps, continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.
The issue is that most CI/CD pipelines are limited in their use to the cloud-native, so-called new technology side of the enterprise. Enterprises today are awaiting the next evolution: a common, shared pipeline across all technology cultures. To achieve this, many organizations need to progress from simple automation to business release coordination, or orchestration.
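A deliberately simplified sketch of that sequencing is shown below; the stage names and placeholder commands are assumptions, and in practice the pipeline would live in a CI server such as Jenkins rather than a hand-rolled script:

```python
"""Highly simplified sketch of a CI/CD sequence: build and test on every
check-in (CI), then deploy to a test environment and run acceptance tests
(CD). The commands are placeholders for real build, test and deploy steps."""

import subprocess
import sys

CI_STAGES = [
    ("build", ["echo", "compiling changed objects"]),
    ("unit-test", ["echo", "running unit tests"]),
]
CD_STAGES = [
    ("deploy-to-test", ["echo", "promoting the build to the test environment"]),
    ("acceptance-test", ["echo", "running acceptance scenarios"]),
]


def run(stages):
    """Run each stage in order; stop the pipeline at the first failure."""
    for name, cmd in stages:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            print(f"Stage '{name}' failed - stopping the pipeline")
            return False
    return True


if __name__ == "__main__":
    ok = run(CI_STAGES) and run(CD_STAGES)
    sys.exit(0 if ok else 1)
```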

DevOps facts & Predictions Infographic

DevOps Facts & Predictions

Infographics

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

Discover the Infographic

From Application Release Automation (ARA) to Orchestration (ARO)

Application release automation (ARA) involves packaging and deploying an application/update/release from development, across various environments, and ultimately to production. ARA tools combine the capabilities of deployment automation, environment management and modeling.
By 2020 Gartner predicts that over 50% of global enterprises will have implemented at least one application release automation solution, up from less than 15% in 2017. Approximately seven years old, the ARA solution market reached an estimated $228.2 million in 2016, up 31.4% from $173.6 million in 2015. The market is continuing to grow at an estimated 20% compound annual growth rate (CAGR) through 2020.
The ARA market is evolving fast in response to growing enterprise requirements to both scale DevOps initiatives and improve release management agility across multiple cultures, processes and generations of technology. We are seeing ARA morph into a new discipline, Application Release Orchestration (ARO).
One layer over ARA, Application Release Orchestration (ARO) tools arrange and coordinate automated tasks into a consolidated release management workflow. They further best practices by moving application-related artifacts, applications, configurations and even data together across the application life cycle process. ARO spans cross-pipeline software delivery and provides visibility across the entire software release process.
ARO forms the cornerstone of Enterprise DevOps.

Enterprise DevOps: Scaling Release Quality and Velocity

Enterprise DevOps is still new, and competing definitions are appearing. Think of it as DevOps at Scale.
As with Bimodal IT, large enterprises use DevOps teams to build and deploy software through individual, parallel pipelines. Pipelines flow continuously and iteratively from development to integration and deployment. Each parallel pipeline uses toolchains to automate or orchestrate the phases and sub-phases of the Enterprise DevOps SDLC.
At a high level the phases in the Enterprise DevOps SDLC can be summarized as plan, analyze, design, code, commit, unit-test, integration-test, functional-test, deploy-to-test, acceptance-test, deploy-to-production, operate, user-feedback.
The phases and tasks of the ED-SDLC can differ within each pipeline, or there can be a different level of emphasis on each phase or sub-phase. For example, in bimodal mode 1 on a system of record (SOR), the plan, analyze and design phases may be of greater importance than in bimodal mode 2. In bimodal mode 2 on a system of engagement (SOE), the frequency of the commit, unit-test, integration-test and functional-test phases may be emphasized.
Risk of deployment error is high in enterprise environments because toolchains in each pipeline differ, and dependencies exist between artifacts in distinct pipelines. Orchestration is required to coordinate the processes across the pipelines. Orchestration equates to a more sophisticated automation, with some built in intelligence and an ultimate goal to be autonomic.

How to transition Legacy systems to DevOps?

In response to the challenges of Bimodal IT, we have reached a point where classic DevOps and Release Management disciplines converge.
For over 25 years Arcad Software has been helping large enterprises and SMEs improve software development through advanced tools and innovative new techniques. During this time, we have developed deep expertise in legacy IBM i and z/OS systems. Today we are recognized by Gartner Research as a significant player in the Enterprise DevOps and ARO space for both legacy and modern platforms.
Many ARO vendors assume greenfield developments on Windows, Unix and Linux, and hence legacy systems become an afterthought. ARCAD is different; we understand the need to get the most from your company’s investment in legacy systems over the past decades, and also the demands and challenges of unlocking the value within these legacy applications.  ARCAD ensures you can offer your application owners and stakeholders a practical, inclusive, step-by-step solution to deliver both DevOps and ARO for new and legacy applications alike, rather than an expensive and risky rip-and-replace project.

Leveraging existing CI/CD pipelines

There are a huge number of tools available to organisations to deliver DevOps today. Tools overlap and the danger is “toolchain sprawl”. Yet no one tool can address all needs in a modern development environment. It is essential therefore that all selected tools can easily integrate with each other.
The ARCAD for DevOps solution has an open design and integrates easily with standard tools such as Git, Jenkins, JIRA, ServiceNow. It is capable of orchestrating the delivery of all enterprise application assets, from the most recent cloud-native technologies to the core legacy code that underpins your business.

ARCAD has a proven methodology to ensure we leverage the value in your Legacy applications and avoid a rip-and-replace approach.  ARCAD solutions extend and scale your existing DevOps pipeline into a frictionless workflow that supports ALL the platforms in your business.

Modernizing your IT assets

If the future of legacy application assets is your concern, then complementary ARCAD solutions can automate the modernization of your legacy databases and code – increasing their flexibility in modern IT architectures, and making it easy to hire younger development talent and ensure the new hires can collaborate efficiently with older legacy team members.

With 25 years of Release Management experience working with the largest and most respected Legacy and Digital IT teams across the globe, ARCAD Software has built security, compliance and risk minimization into all of its offerings. This is exactly the place that DevOps is headed.

(*) Source: IDC

White Paper Enterprise DevOps

Enterprise DevOps White Paper

This paper attempts to debunk competing DevOps concepts, terminologies and myths in order to help make the path forward clearer and more practical.

Download the White Paper

SpareBank1 Case Study

Success Story SpareBank1 ARCAD for DevOps

SpareBank1 drives rapid development cycles on the IBM i, reducing costs of environment management & compliance by 70%

Read the story

Olenka Van Schendel

Olenka Van Schendel

VP Strategic Marketing & Business Development

With 28 years of IT experience in both distributed systems and IBM i, Olenka started out in the Artificial Intelligence domain and natural language processing, working as software engineer developing principally on UNIX. She soon specialized in the development of integrated software tooling including compilers, debuggers and source code management systems. As VP Business Development in the ARCAD Software group, she continues her focus on Application Lifecycle Management (ALM) and DevOps tooling with a multi-platform perspective including IBM i.

2018-12-17T11:15:12+00:00 Blog|

The evolution of DevOps

By Marc Dallas | October 9th 2018

DevOps practices have evolved in recent years in many organizations seeking to respond more effectively to their business challenges.
While DevOps previously focused primarily on IT services, it now extends across the entire enterprise, impacting processes and data flows and driving deep organizational changes.

DevOps, above all a management of change

Organizations that have embraced DevOps either to the full or even partially can already testify that this approach carries a significant ROI.
Many others have explored and come close to DevOps but have not yet taken the final step.
The main reason for this hesitation is that a DevOps transition goes beyond the adoption of new tooling and into people and process; most importantly, it requires careful management of change.
Indeed, DevOps is not just about choosing the right automation solution. It requires an accompanied transition, wherein lies the role and responsibility of the solution vendor. In a DevOps project, levels of maturity and understanding differ between organizations. A DevOps solution provider therefore has a duty to advise and support in the management of change and should add value to the project beyond a simple automation. Company specifics must be taken into account, and in particular, the scope and diversity of development cultures and technology platforms contained in the application portfolio. Without this, a DevOps project has no chance of success.

The emergence of DevSecOps and BizOps

The emergence of these new terms is directly related to the “complicated” relationship between Development and Operations.
Over a decade ago, development teams had already adopted mainstream agile methods and were releasing smaller software increments faster and more frequently, while operations – upholding their corporate constraints around application availability and compliance – became an apparent bottleneck in the process. To keep software development cycles fluid and deliver updates to the end-user at the speed of the business, operations had to follow this same agile trend.

The DevOps movement held the key. By enhancing communication in a way that recognizes and respects the constraints of each department, we have transitioned to a dialogue, an exchange, and a set of processes that meet the needs of each profession and integrate their respective constraints in order to collaborate effectively. This is the essence of what is meant by DevOps.
The appearance of these new and related terms DevSecOps and BizOps is simply evidence of the extension of this level of communication to all departments in a company, a progression in business change.

DevSecOps, for example, aims to enhance security by integrating it early in the application development process. We could add other departments into the chain.
Above all, this means that today companies are realizing that there is a need to have a wider software supply chain which, at each link in the chain, integrates the same principles exemplified by DevOps.

BizOps is a more generic term. It describes an extended chain between business and operations. Ultimately there is a contraction that we could call “BizDevSecOps”.
BizOps involves strategic and operational management. Indeed we should extend the term further than Ops today, as far as users (BizUsers).
We are reminded of terms such as BtoB or BtoC, except that with DevSecOps and BizOps we embark on a change in internal organization, necessary for the company to thrive. We retain a level of granularity in tasks to allow focus on solving problems in a particular area. It’s about defining and executing all the required actions and automating them in a continuous delivery environment.
This is the idea behind Release Coordination, right the way from the business strategy to the provision of new releases to the end-user.

DevOps facts & Predictions Infographic

DevOps Facts & Predictions

Infographics

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

Discover the Infographic

The challenges of Enterprise DevOps

The concept of Enterprise DevOps elevates DevOps into a business strategy, a process that adds value to the organization, not just IT.
The issues in terms of identification, validation of releases between different services, causes of bottlenecks, decision times, and implementation or delivery durations can, if examined at DevOps scale, be an area for experimentation. We will then be able to extend this inter-department cooperation across the entire company, which will de facto increase the overall Return on Investment.
And this is the challenge of Enterprise DevOps: that the entire company becomes aware of the added value brought by this change of collaboration between services.
All this microscopically managed work between Dev and Ops will then be implemented on a macroscopic scale across the entire enterprise chain (from the strategic decision to the end user).

The question of DevOps for Database

Although it is not new, the consideration of data in DevOps is gaining momentum.
To save time and reduce development effort, the concept of parameterizing data (whatever the data type, structure and underlying data management technology) was introduced, so that program behaviour can be modified depending on the specific data entered.
Parameter data therefore has an impact on the behavior of program execution. As such, these data actually belong to the field of development and operation of the application.

Generally, as the data volume remains low, typically very basic processes are used for the transfer of parameter data to production.
These elementary processes therefore do not usually cater for the rollback of data, or the identification of the version number of the installed system – capabilities that are considered low priority as the volume of data is relatively small.
Yet the critical nature of parameter data makes these processes in reality very important.
By underestimating their importance, we introduce a weak link in the quality chain, and run the risk of an incident in production that can cause huge financial losses, but also a loss of confidence in the deployment process.
It is therefore vital not to focus solely on the frequency and scale of deployment, but also on the criticality of the data that is being deployed.
Parameter or configuration/settings data must follow the same quality chain as the applications themselves, as is the promise of “DevOps for Database“.
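As a sketch of what that can mean in practice (the table layout, column names and use of SQLite are invented purely for illustration), parameter data can be deployed with a recorded version number and its previous values retained, so that a rollback is always possible:

```python
"""Sketch of versioned parameter-data deployment with rollback, using SQLite
purely for illustration. Table and column names are invented; the point is
that every change to parameter data records the prior value and a version,
so production can be rolled back like any other release artifact."""

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parameters (key TEXT PRIMARY KEY, value TEXT);
    CREATE TABLE parameter_history (
        version INTEGER, key TEXT, old_value TEXT, new_value TEXT);
    INSERT INTO parameters VALUES ('VAT_RATE', '19.6');
""")


def deploy(version: int, changes: dict) -> None:
    """Apply a new set of parameter values, keeping the old ones for rollback."""
    for key, new_value in changes.items():
        old = conn.execute("SELECT value FROM parameters WHERE key = ?", (key,)).fetchone()
        conn.execute("INSERT INTO parameter_history VALUES (?, ?, ?, ?)",
                     (version, key, old[0] if old else None, new_value))
        conn.execute("INSERT OR REPLACE INTO parameters VALUES (?, ?)", (key, new_value))


def rollback(version: int) -> None:
    """Restore the values that were in place before the given version."""
    rows = conn.execute(
        "SELECT key, old_value FROM parameter_history WHERE version = ?", (version,)).fetchall()
    for key, old_value in rows:
        conn.execute("UPDATE parameters SET value = ? WHERE key = ?", (old_value, key))


deploy(2, {"VAT_RATE": "20.0"})
rollback(2)
print(conn.execute("SELECT value FROM parameters WHERE key = 'VAT_RATE'").fetchone()[0])  # 19.6
```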

Conclusion

  • DevOps is not just about process automation; it involves true management of change
  • The terms DevSecOps and BizOps reveal that companies now recognize the need for an enterprise-wide software supply chain
  • The added value of inter-department collaboration is realized across the wider enterprise
  • Often critical data must follow the same quality chain as the applications

White Paper « DROPS for DevOps »

DROPS for DevOps White Paper

This White Paper describes the opportunity, the challenges and the solutions offered by DROPS as you roll out a DevOps strategy in your multi-platform environments.

Download the White Paper

Systeme U Case Study

Système U Success story

Systeme U cuts application deployment costs by 40% using DROPS on IBM i & Linux

Read the story

Marc Dallas

Marc Dallas

R&D Director

With a Software Engineering degree from the Integral International Institute, Marc started his career in 1994 as an Analyst Programmer at Nestle Cereal Partners, and was appointed Product Manager at ADSM Software prior to joining ARCAD Software in 1997.

2018-11-28T13:03:37+00:00 Blog|

Continuous Testing (CT) in your DevOps Strategy

Continuous Testing (CT) in your DevOps Strategy

As DevOps drives faster and more frequent software delivery, the pressure on testing staff grows. Each update needs to be regression-tested to avoid the risk of downtime. At this rate of change, manual testing becomes a bottleneck, and is often the first task to be sidelined.

If you are testing manually, watch our Webinar to learn how to automate the process of Continuous Testing and catch errors as early as possible in the cycle:

  • Increase your team’s productivity
  • Shorten time to delivery
  • Increase application reliability in production
  • Reduce IT costs

We will demonstrate how easy it is to record test scenarios from your 5250, client/server and web interfaces. Learn how to automatically replay all scenarios impacted by a software change, and quickly identify errors via graphical reports.

Whether you have 2 Testers or 40 Business Analysts performing regression testing, watch it now!

Watch the Replay

(more…)

2018-10-26T15:07:58+00:00 On-demand Webinars|

5 most common questions about data anonymization

by Maurice Marrel | September 13th 2018

GDPR and other data privacy and data protection regulations have raised more questions around the handling of data than ever before. We asked our DPO and anonymization expert, Maurice Marrel, to answer some of the most common questions facing our customers today.

1. What is the role of anonymization in GDPR compliance?

In recent years, “digital everywhere” has dramatically transformed the flow of data.
Production data is copied into test, QA or pre-production environments, and exposed to the eyes of testers, receivers or unauthorized developers on machines much less protected than production environments.
Many files are also shared with external partners, who often only require a small part of the data actually transferred.

This personal data must be protected from leaks and other indiscretions.
In response, specific new legislation has emerged, such as the GDPR in Europe.

These new regulations mandate the desensitization of confidential data.
Desensitization means transforming the data, using non-reversible algorithms.
However, the data must remain usable. A test user must still see on the screen, in the last name field, a modified last name that “looks like” a last name.
Similarly, the domain must remain the same: an IBAN / RIB or a social security number must stay valid and compatible with the requirements and validation checks made by applications to allow the tests to actually run.
These same constraints must still apply even in the case of data redundancy in legacy databases, or across multiple database management systems.
These concerns must all be taken into account by any anonymization solution.
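A minimal sketch of this idea follows (the substitution list, the salt and the hashing scheme are assumptions for illustration, not the algorithm used by DOT Anonymizer): a surname is replaced, deterministically and irreversibly, by another plausible surname, so the result still “looks like” a last name and redundant copies of the same value stay consistent across databases:

```python
"""Minimal sketch of non-reversible but realistic-looking anonymization:
a surname is replaced by another plausible surname chosen deterministically
from a substitution list. The list, the salt and the hashing scheme are
assumptions for illustration, not a real product's algorithm."""

import hashlib

SUBSTITUTE_SURNAMES = ["MARTIN", "BERNARD", "DUBOIS", "THOMAS", "ROBERT", "PETIT"]
SECRET_SALT = b"per-project-secret"   # kept out of test environments


def anonymize_surname(surname: str) -> str:
    """Map a real surname to a plausible replacement; not reversible without the salt."""
    digest = hashlib.sha256(SECRET_SALT + surname.upper().encode("utf-8")).digest()
    return SUBSTITUTE_SURNAMES[digest[0] % len(SUBSTITUTE_SURNAMES)]


if __name__ == "__main__":
    for name in ["Durand", "Lefebvre", "Durand"]:
        print(name, "->", anonymize_surname(name))   # same input, same replacement
```

Because the replacement is deterministic, the same original value always maps to the same substitute, which preserves consistency across redundant copies of the data, the concern raised above.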

2. Anonymization and pseudonymization – how do they differ?

Anonymization ensures that the original data can never be retrieved by any means, unlike pseudonymization.

In a test environment, even if the machines are secure, it is the developers, testers, QA staff, and training personnel who have direct access to the data. It is therefore imperative to anonymize or pseudonymize the data upstream.
In the case of a pseudonymization, the data can optionally be kept encrypted in software metadata, so it can be retrieved individually on request, and only to authorized persons. The old data in this case are preserved. This can be useful for example to check specific, one-off problems in a test environment.

Pseudonymization is often the only solution that allows normal operation of applications and the completeness of test scenarios.
On the other hand, it is a potentially reversible technique, because some identification keys may not be replaceable for technical reasons. Pseudonymization can leave identifiable data in place, such as customer numbers, which are sometimes the only link between data storage technologies (DBMS, files). Combining these data with each other can help malicious organizations statistically guess some of the original data.
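By way of contrast, the sketch below shows one possible shape of pseudonymization with recoverable originals (the field names and pseudonym scheme are invented, and it relies on the third-party Python cryptography package): the displayed value becomes a pseudonym while the original is kept encrypted, recoverable only by holders of the key.

```python
"""Sketch of pseudonymization with recoverable originals: the displayed value
is replaced by a pseudonym, while the original is kept encrypted so that only
key holders can recover it on request. Requires the third-party 'cryptography'
package; field names and the pseudonym scheme are invented for the example."""

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # held only by authorized persons
cipher = Fernet(key)


def pseudonymize(record: dict) -> dict:
    """Replace the last name with a pseudonym, keeping the original encrypted."""
    original = record["last_name"]
    return {
        **record,
        "last_name": f"CUSTOMER-{record['customer_id']}",                # pseudonym shown to testers
        "last_name_token": cipher.encrypt(original.encode()).decode(),   # recoverable by key holders
    }


def recover(record: dict) -> str:
    """Authorized recovery of the original value from the stored token."""
    return cipher.decrypt(record["last_name_token"].encode()).decode()


row = {"customer_id": 1042, "last_name": "Dupont"}
masked = pseudonymize(row)
print(masked["last_name"])   # CUSTOMER-1042
print(recover(masked))       # Dupont (only possible with the key)
```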

3. Personal vs. sensitive data – what does this change for data handling?

According to the CNIL, personal data is “any information relating to a natural person who can be identified, directly or indirectly”. Whereas sensitive data refers to “any information that reveals racial or ethnic origins, political, philosophical or religious opinions, trade union membership, health or sexual orientation of a natural person”.

But this differentiation of data can be confusing.
The most important point is to identify the data to be anonymized. The goal is to prevent anyone being able to find links between these data. For example, you are unable to modify health status type data if the corresponding first and last names are anonymized.

Anonymization therefore utilizes algorithms that apply to all types of data.

4. How can I safeguard IT performance when introducing anonymization?

It is important to not only consider performance alone, but also take security into account.
Anonymization means an additional process, and will therefore necessarily have an impact on performance. However, if it is well planned for, and its scope and requirements are well defined, any impact will be minimized. And on average, only about twenty percent of data needs to be anonymized.

In general, the data to be anonymized will be retrieved directly from a production environment for insertion into a test environment. But even if users (developers, testers etc.) do not have access during processing, test environments are usually less protected.
The ideal solution, in this case, will be to make a copy of the production database. This will allow the first instance to remain available while the other is being anonymized.
The anonymized data will then be dispatched to the relevant test, QA and training environments.
Another solution is to isolate a copy of the production environments in test machines while limiting access during the anonymization, then distribute onto the test environment.

5. How can I identify which data should be anonymized?

Typically, anonymization is required for test environments.
A good knowledge of the overall scope of the database is important, because it will help in assessing which types of data will need to be anonymized.
It is also important to consider how specific data relate to each other, as some data are inseparable.
To assist the administrator, the discovery of the data eligible for anonymization must be as automated as possible, using algorithms catering for the various types of data.
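As a hedged sketch of such automated discovery (the sample values and the pattern set are invented for the example; a real tool would use far richer rules and database catalog information), sampled column values can be matched against simple patterns to flag likely personal data:

```python
"""Sketch of automated discovery of columns that are candidates for
anonymization: sampled values are matched against simple patterns. The sample
data and the pattern set are invented; a real tool would use richer rules
and database catalog information."""

import re

PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "phone": re.compile(r"^\+?[\d .()-]{8,}$"),
    "iban":  re.compile(r"^[A-Z]{2}\d{2}[A-Z0-9]{10,30}$"),
}

SAMPLE_COLUMNS = {                      # column name -> sampled values (invented)
    "CUST_EMAIL": ["jane.doe@example.com", "j.smith@example.org"],
    "CUST_IBAN":  ["FR7630006000011234567890189"],
    "ORDER_QTY":  ["3", "12", "1"],
}


def classify(samples):
    """Return the set of pattern names that most sampled values match."""
    hits = set()
    for label, pattern in PATTERNS.items():
        matches = sum(1 for v in samples if pattern.match(v))
        if samples and matches / len(samples) >= 0.8:
            hits.add(label)
    return hits


if __name__ == "__main__":
    for column, samples in SAMPLE_COLUMNS.items():
        kinds = classify(samples)
        flag = "anonymize" if kinds else "keep"
        print(f"{column}: {flag} {sorted(kinds) if kinds else ''}")
```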

But in some cases, anonymization is needed for production environments. This is especially the case with the “right to be forgotten“, which has been considerably reinforced by the GDPR.
Indeed, anyone residing in the European Union whose personal data is held by an organization may take control over his/her data.
But in many cases, simply deleting this data would have a significant impact on other data. In such cases anonymization is therefore a better solution as it renders personal data inaccessible, while preserving the usability of data to allow normal application operation and consistency of results.
Take the example of an online commerce site. When a product is sold, out-of-stock, money-in, or parcel-delivery data are necessary for the business to operate and cannot be removed. However, the name of the buyer, his address or banking data can be.
The right to be forgotten, whether it results from a specific request or a regulation on the conservation of historical data, is the most common reason for anonymizing a production environment.

Conclusions

  • Anonymization meets the requirements of the GDPR because it transforms data irreversibly, while retaining its usability
  • Anonymization concerns all data, personal or sensitive
  • If the anonymization scope and requirements are well defined and planned ahead, any impact on performance will be minimized
  • Anonymization may be necessary in a production environment in response to “right to be forgotten” requirements

White Paper « Protection of personal data »

Protection of personal data - White Paper thumbnail

This document details the fundamentals of the GDPR, and recommendations as to how to become compliant before the 2018 deadline.

Download the White Paper

DOT Anonymizer Datasheet

DOT Anonymizer Datasheet

The anonymization of personal data is an ever-sensitive subject. This document will show you how DOT Anonymizer helps you protect the confidentiality of your test data.

Download the Datasheet

Maurice Marrel

Maurice Marrel

Senior Solutions Consultant, DOT Software

Maurice Marrel has over 20 years’ experience on IBM i (and its predecessors), remaining actively involved in modernization projects at the forefront of technology on the platform. Now specializing in technical pre-sales and training for ARCAD’s solutions for Enterprise Modernization on IBM i, Maurice has a wide-ranging technical background including IT management in the aerospace and energy industries, and project leadership in several technology sectors including software development tooling.

2018-11-29T18:42:03+00:00 Blog|

ARCAD releases new CodeChecker module to guarantee software quality and reduce DevOps risk

ARCAD releases new CodeChecker module to guarantee software quality and reduce DevOps risk

Peterborough, NH and Annecy, France – 16 September 2018 – ARCAD Software, leading vendor in DevOps and Enterprise Modernization solutions for IBM i, today announced the release of a new module in its DevOps suite: ARCAD-CodeChecker, for continuous source code quality analysis.

(more…)

2018-10-02T11:34:27+00:00 Press Articles|

3 steps to zero-risk Modernization on IBM i

3 steps to zero-risk Modernization on IBM i

Starting a modernization project on IBM i can be a daunting prospect, faced with the many options out there: Webservices? N-tier? Web, mobile? Java, .NET?

Join us for our three-part Modernization Webinar Series on September 18th, 25th and 27th with Barbara Morris, Scott Forstie and Tim Rowe, and learn how to get from A to Z with minimum risk!

Featuring actual case studies, our series is structured around a 3-step approach to risk-free modernization:

  • Step 1: Analyze – Where do I start to modernize? What are my choices?
  • Step 2: Structure – Laying a secure foundation with a structured DevOps process
  • Step 3: Transform – Automating the conversion of RPG source code, database and UI

Watch the Replay

Our special Guests

1st Part – Tim Rowe takes a tour of the very latest technology options on IBM i, with the goal of “Making IBM i normal!”.  Tim guides us into making the right choice of development language, database, method and tooling using the “best tool for the job”, taking performance and data integration into account.  Assess the use of open source tools like Git and Jenkins in an enterprise DevOps setting.  Learn the latest in connectors including MQ, JDBC, ODBC, REST and SQL Services…  A round trip of the “art of the possible” on IBM i!

2nd Part – Barbara Morris proves that Free Form RPG is a game-changer, making RPG universally easy to code and maintain. Learn which “old-fashioned” RPG coding patterns to avoid.  Code modularity means breaking up code into smaller pieces for easier re-use.  But how to make existing monolithic RPG code modular?  Start with a prior analysis of the code, and a gradual implementation of changes – from simple improvement of variable names, through to complex changes, such as pulling out a section of code into a procedure.  Safeguard your work with continual testing, already in place before making large-scale changes to the code.

3rd Part – Scott Forstie takes the subject of modernization down to the database, discussing the options for automated conversion to SQL and the rights and wrongs of a surrogate approach.

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

About ARCAD Software

Created in 1992, the ARCAD Software group is a leading international vendor in integrated DevOps and Enterprise Modernization solutions, with subsidiaries in Europe, the USA and Asia, supporting 1000+ installations across 35 countries. ARCAD solutions are distributed by IBM worldwide. ARCAD’s DevOps technology is positioned in the 2017 Gartner Magic Quadrant for Application Release Automation (ARA).

North America – ARCAD Software Inc.
1 Phoenix Mill Lane, Suite 203
Peterborough NH 03458
Toll free: 800.676.4709
Phone: 603.371.9074
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
Asia – ARCAD Software Asia
c/o Pramex International Limited
7107B, 71/F, International Commerce Centre
1 Austin Road West, Tsim Sha Tsui
Hong Kong
sales-asia@arcadsoftware.com

DevOps Facts & Predictions – Infographic

by Olivier Bourgeois | September 6, 2018

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

Infographic – DevOps Facts & Predictions
DevOps for IBM i White Paper

Improve your DevOps skills!

White Paper

This White Paper describes the opportunity, the challenges and the solutions offered by DROPS as you roll out a DevOps strategy in your multi-platform environments.

Download the White Paper

2018-11-28T15:25:32+00:00 Blog|

Secure the missing link in your Application Release process

Secure the missing link in your Application Release process.

Deployment is by far the most critical phase in software delivery. Any incident can have costly consequences for application availability and even for your company’s reputation.

Many organizations already automate application deployment, but still run a major risk:  Reliability in Production.

Whatever technologies you employ, in our webinar you’ll learn how to:

  • Secure the deployment process
  • Minimize the risk of errors in production
  • Keep operational control over application availability
  • Safeguard against costly downtime

Protect your business’s bottom line. Watch our Webinar!

Demonstration

Whether you run your business on Windows, UNIX, Linux, IBM i (aka iSeries, AS/400) or mainframe z/OS platforms, application reliability in production is a critical and constant concern.

In our Webinar, you will learn how to rapidly:

  • Return to a previous stable application state in the case of error,
  • Rollback your database upgrades,
  • Check the integrity of your deliveries before triggering a deployment,
  • Integrate your entire application portfolio, including software packages,
  • Manage all architectures (Legacy, Web, Mobile, Cloud) with one single tool,
  • Comply with regulations regarding the separation of roles and responsibilities,
  • Coordinate deployment with other daily operations tasks.

With concrete examples we’ll show how you can complete your DevOps strategy using existing enterprise tools (GitHub, Jira, Jenkins, Ansible, Docker, etc.).
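To make one of the points above concrete – checking the integrity of a delivery before triggering a deployment – here is a generic Python sketch, not ARCAD’s implementation, that verifies each artifact in a delivery against a SHA-256 manifest and aborts the pipeline if anything has been altered. The manifest file name and layout are assumptions.

  # Generic sketch (not ARCAD's implementation): verify a delivery against a
  # checksum manifest before allowing deployment to proceed.
  # Assumes a "manifest.sha256" file of "<hex-digest>  <relative-path>" lines.
  import hashlib
  import sys
  from pathlib import Path

  def verify_delivery(delivery_dir: str, manifest: str = "manifest.sha256") -> bool:
      root = Path(delivery_dir)
      ok = True
      for line in (root / manifest).read_text().splitlines():
          expected, rel_path = line.split(maxsplit=1)
          actual = hashlib.sha256((root / rel_path).read_bytes()).hexdigest()
          if actual != expected:
              print(f"INTEGRITY FAILURE: {rel_path}")
              ok = False
      return ok

  if __name__ == "__main__":
      # A non-zero exit code stops the deployment step in the pipeline.
      sys.exit(0 if verify_delivery(sys.argv[1]) else 1)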

 

Watch the replay

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of consulting experience in software development, Floyd is DevOps Advocate for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps migration and Enterprise Modernization projects, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of CI/CD, Test Automation and Application Lifecycle Management.

Ray Bernardi

Senior Consultant, ARCAD Software

Ray Bernardi is a 30-year IT veteran and currently Senior Consultant for ARCAD Software, an international ISV and IBM Business Partner. Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) covering a broad range of functional areas including enterprise modernization, CI/CD and DevOps. In addition, Ray is a frequent speaker at technical conferences around the world and has authored articles in several publications on the subjects of application analysis and modernization, DevOps, and business intelligence.

About ARCAD Software

Created in 1992, the ARCAD Software group is a leading international vendor in integrated DevOps and Enterprise Modernization solutions, with subsidiaries in Europe, the USA and Asia, supporting 1000+ installations across 35 countries. ARCAD solutions are distributed by IBM worldwide. ARCAD’s DevOps technology is positioned in the 2017 Gartner Magic Quadrant for Application Release Automation (ARA).

North America – ARCAD Software Inc.
1 Phoenix Mill Lane, Suite 203
Peterborough NH 03458
Toll free: 800.676.4709
Phone: 603.371.9074
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
Asia – ARCAD Software Asia
Room 22, Smart-Space 3F – Units 908-915, Level 9, Cyberport 3
100 Cyberport Road – Hong Kong
Phone: +852 3618 6118
sales-asia@arcadsoftware.com

2018-10-26T11:59:37+00:00 On-demand Webinars|

Orchestrate a CI/CT/CD pipeline for IBM i using Git, Jenkins and JIRA

Orchestrate a CI/CT/CD pipeline for IBM i using Git, Jenkins and JIRA

Looking to orchestrate a continuous delivery pipeline for all your IBM i code – RPG, CL, DDS or COBOL – using the same tools as on your open systems?

…Automate the integration, test, and delivery of your RPG changes?
…Share a common source code repository between your IBM i and open-systems developers?
…Ensure that continuous test is an integral part of your CI/CD workflow?

In our Webinar, we’ll demonstrate how you can achieve all this with an integrated CI/CT/CD pipeline on IBM i using your standard enterprise tools Git, Jenkins and JIRA:

  • continuous integration (CI) and dependency build of RPG, CL, DDS, …
  • continuous “regression” test (CT)
  • continuous deploy (CD) & rollback on error

Simplify your DevOps toolchain across IBM i and open systems.  Watch the Webinar!
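As a flavour of how one link in such a pipeline can be wired, the sketch below triggers a parameterized Jenkins job from Python via the Jenkins remote API after a Git push. The URL, job name, parameter and credentials are placeholders; in practice a Git webhook or the Jenkins Git plugin would normally fire the build automatically.

  # Sketch: trigger a parameterized Jenkins build after a Git push.
  # URL, job name, parameter and credentials below are placeholders.
  import requests

  JENKINS_URL = "https://jenkins.example.com"
  JOB_NAME = "ibmi-rpg-build"                 # hypothetical job name
  AUTH = ("ci-user", "jenkins-api-token")     # user + Jenkins API token

  def trigger_build(branch: str) -> None:
      resp = requests.post(
          f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
          params={"BRANCH": branch},          # parameter defined in the job
          auth=AUTH,
          timeout=30,
      )
      resp.raise_for_status()
      print(f"Queued {JOB_NAME} for branch {branch}")

  if __name__ == "__main__":
      trigger_build("feature/order-entry-fix")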

Watch the replay

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

Ray Bernardi

Senior Consultant, ARCAD Software

Ray Bernardi is a 30-year IT veteran and currently a Pre-/Post-Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner. Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in the Application Lifecycle Management (ALM) products from ARCAD Software, covering a broad range of functional areas including enterprise IBM i modernization and DevOps. In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several publications on the subjects of application analysis and modernization, SQL, and business intelligence.

About ARCAD Software

Created in 1992, the ARCAD Software group is a leading international vendor in integrated DevOps and Enterprise Modernization solutions, with subsidiaries in Europe, the USA and Asia, supporting 1000+ installations across 35 countries. ARCAD solutions are distributed by IBM worldwide. ARCAD’s DevOps technology is positioned in the 2017 Gartner Magic Quadrant for Application Release Automation (ARA).

North America – ARCAD Software Inc.
1 Phoenix Mill Lane, Suite 203
Peterborough NH 03458
Toll free: 800.676.4709
Phone: 603.371.9074
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
Asia – ARCAD Software Asia
Room 22, Smart-Space 3F – Units 908-915, Level 9, Cyberport 3
100 Cyberport Road – Hong Kong
Phone: +852 3618 6118
sales-asia@arcadsoftware.com

2018-09-07T16:53:05+00:00 On-demand Webinars|

2018 will undoubtedly mark the advent of the digital era

Dear customer / Dear partner,

2018 will undoubtedly mark the advent of the digital era. All companies have understood that they have to adapt to this new world or risk outright disappearance. The good news is that they have the means and the motivation to do so and many have already targeted their investments in this direction.

Entering the digital era is first and foremost a realization that tomorrow’s users, customers and partners will be the young generations now entering the job market, immersed in digital technology in their daily lives and overturning all the established conventions.

The digital age means thousands of new mobile applications that need to interact with the core systems. It also means thousands of web services and web interfaces delivering a richer user experience.

This new mix of technologies and the necessary adaptations in the information system make DevOps an essential strategy for all IT organizations, large and small.

While many companies are already mature in their “DevOps journey”, it is often applied only to their new technologies; this is far from the case for their so-called “legacy” systems. The new challenge is to extend the DevOps approach across the entire information system, and here again to adapt the IT organization to the younger generations. Without that shift, who will maintain the critical applications at the very core of the company’s business?

We live in an exciting era of profound change and opportunity. The strength of ARCAD’s organization and technology is its ability to be credible with populations that differ widely in culture, age and experience. We were the first to integrate into our offering the tools, open source or otherwise, from the open world that are already very popular on the market. This approach makes it possible to generalize a DevOps strategy, whatever the technologies and languages used. It brings credibility to the use of these tools in the legacy world, while making legacy platforms just another part of the overall information system. The transition will probably be long, but at least the strategic direction is very clear. 2018 will be, we are convinced, the advent of the “DevOps for legacy” era.

You will find in this newsletter many examples that illustrate my point.

Yours sincerely,

Philippe Magne

Philippe Magne

CEO And Chairman

2018-02-14T16:48:58+00:00 Miscellaneous|

Employee spotlight – Interview with our Indian developers

Read our interview with the developers who recently joined ARCAD (more…)

2018-11-28T15:24:23+00:00 Blog|