
5 keys to IBM i Modernization Success

by Alexandre Codinach | September 27th 2018

Successful IBM i application modernization projects are those that find the right balance between IT and business objectives.

These objectives can take the form of:

  • Improved system maintainability, flexibility, and scalability
  • Adoption of new tools and methods of development
  • Reduced risks and operational costs
  • Reduced time to market
  • Improved customer satisfaction and productivity
  • Easier hiring of skilled resources

Whatever the reason for a modernization project for a legacy system like IBM i, it is important to identify some key points for the success of the project:

1. Obtain backing from general management

Whatever its scope, a modernization project is a business project that goes beyond IT issues alone.  The stakes relate to the performance of the company, its development and sometimes its survival, although the subject matter may be somewhat obscure to the layman.

Tip:  “Popularize” the modernization project by conveying the business value associated with the technical gain.  Translate the technical argument into a business argument, and weigh any short-term impacts against the Return on Investment at the end of the project.  Secure management backing right at the start through an understanding of the business value gained from modernization and the risk of inaction.

2. Define an overall modernization roadmap

In such a project, not everything can, or must, be modernized.

We are dealing not with one modernization but with several. The approach must not be “Manichean”: techniques such as modernizing the existing system, reengineering and/or adopting software packages are not necessarily incompatible.

There is no “silver bullet” that takes you from legacy to modern in one step. Complete renewal within 3 years is a fantasy. Modernization is a continuous, staged process, which must interleave quick wins and longer-term goals.

Tip:  Plan regular communication points so that everyone in the organization visualizes and understands the issues. Including resources from the business side and defining clear business indicators will help this process.

3. Involve staff early, to include all impacted parties

Just like any IT project, even when it is outsourced, modernization consumes staff resources.

Over and above the technical side of the project, it is important to take into account an overall change management process within the organization, from IT right through to the business users, whose interaction with the application may change significantly enough to impact their daily work.

Tip:  Involve impacted staff right from the analysis phase of the project, to participate in the decision making process and be the first lever of communication with the teams.

4. Secure through automation

As work is underway, business must go on: modernizing must NOT mean putting projects on hold and ceasing to deliver new features needed by the business lines.

Automating your application lifecycle reduces risks and increases the productivity of IT staff by allowing them to focus on value-added work. In the end, this means it will be easier to allocate resources.

Continuous integration and deployment (CI / CD) will help you reduce development times and secure the reliability of applications in production.
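As an illustration, the gate logic behind such a pipeline can be sketched in a few lines. The stage names and `make` commands below are placeholders for your own build, test and deployment invocations, not a prescription for any particular CI tool:

```python
import subprocess

# Hypothetical pipeline stages: each entry is a stage name and the command
# that runs it. Substitute your own build/test/deploy invocations.
STAGES = [
    ("build", ["make", "build"]),
    ("unit-test", ["make", "test"]),
    ("deploy-to-test", ["make", "deploy-test"]),
]

def run_pipeline(stages):
    """Run each stage in order; stop the pipeline at the first failure."""
    for name, cmd in stages:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; stopping the pipeline.")
            return False
        print(f"Stage '{name}' passed.")
    return True

# run_pipeline(STAGES) would drive the real pipeline on each commit.
```

The point of the sketch is the fail-fast behaviour: a broken build or failing test stops delivery before anything unreliable reaches production.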

5. Test for non-regression

Often it is only internal teams and sometimes even only the business users that are able to provide useful scenarios for regression testing. Prepare these scenarios carefully before the project.

You must be able to verify that the modernization process, however wide-reaching, has not resulted in unexpected side effects that could degrade the operation of your application.

Run these tests again during the modernization project and check for errors.
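The principle can be sketched as follows: capture a baseline of scenario outputs from the system before modernization, then replay the same scenarios against the modernized application and flag any divergence. The toy “application” and scenario names here are purely illustrative:

```python
def run_scenario(app, scenario):
    """Replay one scenario (a list of inputs) and collect the outputs."""
    return [app(step) for step in scenario]

def check_regressions(app, scenarios, baselines):
    """Return the names of scenarios whose output no longer matches the baseline."""
    failures = []
    for name, scenario in scenarios.items():
        if run_scenario(app, scenario) != baselines[name]:
            failures.append(name)
    return failures

# Toy example: pricing logic before and after modernization.
legacy_app = lambda qty: qty * 10          # baseline behaviour
modern_app = lambda qty: qty * 10          # modernized, behaviour preserved
scenarios = {"small-order": [1, 2], "bulk-order": [100]}
baselines = {name: run_scenario(legacy_app, s) for name, s in scenarios.items()}
```

An empty result from `check_regressions(modern_app, scenarios, baselines)` is the non-regression guarantee in miniature: same inputs, same outputs.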

Finally, if you do use external teams for all or part of your project, ensure that a non-regression guarantee is included.

It is vital to ensure that the system will continue to meet requirements.

Tip:  Benefit from the investment in testing needed for this project to bring long term improvements in your company’s testing process.

Conclusions

  • Communicate on, and build support for, your project
  • When defining the scope of your project, run a functional audit in addition to the technical audit
  • Anticipate the staffing needs to complete your project
  • Secure the project through automation, to ensure application availability for your end users
  • Check that the system continues to meet requirements using automated regression testing

Modernization as a Service

White Paper

This paper examines the problems associated with maintaining often mission critical IBM i (aka iSeries, AS/400) legacy applications on IBM Power systems.

Download the White Paper
Enterprise Modernization for IBM i

Brochure

“Through enterprise modernization, IBM i organizations can leverage their competitive advantage and R&D investment on a uniquely reliable platform strategically positioned for mobile and cloud technologies into the future.”

Read the Brochure

Alexandre Codinach

VP Sales and Operations Americas

Alexandre Codinach has 30 years of IBM i experience, both technical and managerial, with specialized expertise in the field of IBM i modernization.  With a 360 degree view of IBM i, Alexandre has excelled in many roles, including application architecture, project management, pre-sales and consulting.  As ARCAD COO, his in-depth knowledge of IBM i technology and ability to coordinate large, complex IBM i projects on an international scale have made him a trusted advisor in the rollout of ARCAD’s “Modernization as a Service” projects worldwide.


Test Automation and Source Code Analysis for IBM i: why bother enforcing a new quality gate?

By Nick Blamey | November 27th 2018


What’s in it for the developers and why it is needed for DevOps – a thought-provoking blog by ARCAD’s Director of Northern European operations, Nick Blamey

The business problem:  Continuous Quality of applications depends on implementing policies that enforce immediate validation.  But CIOs responsible for diverse application assets lack both the coding guidelines from which to start measuring code quality and the resources to allocate to this activity.

Continuous Quality (CQ) and DevOps

Software defects drastically increase the cost of application development. Finding and fixing errors in production is often 100 times more expensive than finding and fixing them during the design and coding phases.  It is vital that teams incorporate quality into all phases of software development and automate quality verification as far as possible to locate defects early in the process and avoid repeat effort.  This is what is meant by “Continuous Quality” or CQ, which forms an essential safeguard – or quality gate – in the rapid delivery cycles of DevOps and CI/CD workflows today.

Which techniques are available for Continuous Quality?

Static code analysis is the simplest and most effective method to prevent defects and harden code while accelerating application delivery.

Automating the code analysis as early as the build or Continuous Integration phase means your team can find and fix systemic defects when the cost of remediation is at its lowest.  After the initial investment in configuring rules and metrics the gains in efficiency become exponential over the development lifecycle.
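To make the idea concrete, here is a deliberately tiny sketch of a build-time quality gate. The two rules shown (line length and use of GOTO) are illustrative inventions, not the rule set of any real analyzer:

```python
import re

# Each rule is a name plus a predicate applied to one source line.
RULES = [
    ("line-too-long", lambda line: len(line) > 100),
    ("goto-used", lambda line: re.search(r"\bGOTO\b", line, re.IGNORECASE)),
]

def analyze(source_lines):
    """Return a list of (line_number, rule_name) violations."""
    violations = []
    for lineno, line in enumerate(source_lines, start=1):
        for rule_name, check in RULES:
            if check(line):
                violations.append((lineno, rule_name))
    return violations

def quality_gate(source_lines, max_violations=0):
    """Pass the build only if violations stay within the configured limit."""
    return len(analyze(source_lines)) <= max_violations
```

Wired into the build or CI phase, a gate like this rejects a commit the moment it breaches a guideline, which is exactly when remediation is cheapest.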

To achieve continuous quality, organizations can employ a number of strategies.

  • Static analysis of source code to identify complexity hotspots, deviations from standards, security loopholes, etc.
  • Peer reviews to check code written by one’s equals (peers) to ensure it meets specific criteria.
  • Unit testing to execute and scrutinize the individual modules, or units, of an application, for proper operation.
  • Regression testing to repeat a set of test scenarios on a new application release to identify any deviations from normal operation.
  • Monitoring of the application in production to ensure it is operating correctly after each update and at all times.

To be effective in a DevOps environment each of the above techniques must be both automated and continuous, integrated within the CI/CQ/CD workflow.

How important is Source Code Analysis (SCA) for DevOps?

Source Code Analysis: what alternatives?

Faced with the challenge of auditing an existing IBM i code base for quality, CIOs have a limited number of choices:

  • Complex peer review process: even supported by collaborative tooling, the manual effort involved in a peer review can be difficult to manage across large teams with a range of expertise.
  • External audit of source code by experts: lacking in application knowledge, the learning curve for external code auditors is often steep, making this an expensive option with often unquantifiable benefits.
  • Continuous source code analysis using an automated solution designed specifically for an IBM i RPG code base.

The cost of defects in application development

A study of software quality by Capers Jones in 2008 came to two very important conclusions:

  • Development “wastage” and defect repairs (from deployed code) absorbed almost two-thirds of the US software workforce – leaving only one third for productive work on new projects.
  • 50% of software development project budgets are spent fixing poor quality code; fewer than 6% of organizations have clearly defined software management processes in place; and software projects of 100,000 function points in size have a failure rate of 65%.

More recent articles on this topic suggest that for many organisations the statistics remain very much unchanged today.

Since DevOps has now taken over as the key driver in most development shops, there is massive potential for optimisation: eliminating the challenges described above and making developers more efficient by letting them spend more time coding and less time fixing defects.

The limitations of functional testing

Most organizations perform functional testing of their applications through the UI, which is required for compliance reasons.  However, as with any black-box testing, each and every defect must be diagnosed from the UI, so this approach gives developers little information to help them actually fix problems.  The result is typically a constant stream of defects classified as:

  • cannot reproduce the issue
  • test environment not set up correctly
  • require more information

With poor information for developers, the challenges are pushed downstream.  Projects face delays due to lengthy code comprehension and a sub-optimal debugging approach. This severely limits the ability of any IBM i development team to maintain the “speed of delivery” required to meet their DevOps targets.

Static Code Analysis: Who does it and why

There are three main approaches to static code analysis in the multi-platform world: Static Analysis for Security, Static Analysis for Code Complexity and Static Analysis for Code Quality.

Many products exist to perform this task and the market is large and expanding, with a few dominant players. The solutions are often extremely expensive and tend to be less relevant on the IBM i, which is less susceptible to security issues than other platforms.

IBM AppScan Source is the best-known example of a market leader for Code Security, but MicroFocus also offer the Fortify Security Suite, and a number of additional tools are available from other vendors, e.g. CheckMarx, Klocwork and CA Veracode.

For Code Complexity metrics, the key players include CAST and McCabe, but neither offers support for RPG on the IBM i.

Why do IBM i Developers need Source Code Analysis?

Given the multiple variants of RPG and the sheer longevity of applications, developers on IBM i face a unique challenge with legacy code bases containing millions of lines of code that have been maintained for sometimes thirty years by successive developers.  It is laborious to understand program logic and assess the quality of code – resources are diverted to address the “technical debt” of the code base.  The challenge is greater still given the ever-growing shortage of RPG skills in the market.  The new Free Form RPG syntax has changed the game, offering a means of onboarding a new generation of developers – making the conversion of RPGLE applications to Free Form the “burning platform” of our day.

Source Code Analysis as a key part of any “legacy code base audit”

Source Code Analysis has the potential to be delivered as part of a wider code audit process.  Companies like ARCAD have built solutions that generate a complete metadata overview of the entire code base, enabling a deeper level of analysis and integrity checking.  Here source code analysis is delivered as part of the code audit and rules and metrics are used to enforce local standards.

ARCAD CodeChecker can create a Code Quality Baseline from which the RPG Code Base can be continually improved through accurate and regular measurement of code quality. This allows CIOs and Development leads to show application owners that they are consistently delivering against ISO 27001 continual improvement goals of the wider organisation.

Widen your net to catch Code Quality issues for RPG

As described above, RPG is a special case in the development world.  Among the standard source code analysis tools, a few (such as SonarQube) are able to perform a simple RPG code review and static quality analysis, but they are severely limited in their coverage (for example, lacking support for the many RPG variants) and in the number of rules they can enforce (limited to around 30, mainly code documentation guidelines).

The potential business risk of these limited tools is that:

  • They are not really usable for code quality guideline enforcement for RPG specifically
  • They tend to create false positives, which limits their effectiveness and could in theory eliminate any value added by the peer review process, by forcing developers to debug issues that are caused by the tool itself.

Modern DevOps organizations are now looking for an “industrial strength” solution to this challenge, to ensure that the implementation of an open-source Source Code Analysis tool doesn’t itself become a bottleneck in the DevOps workflow.

The design goals of ARCAD CodeChecker have therefore been to fit the needs of large, modern, DevOps-oriented IBM i RPG development teams, emphasizing:

  • Rapid scanning of an entire code base
  • Auto-tuning of the quality rules for enforcement on a code-base-by-code-base, stage-by-stage basis
  • Real value for the developers doing the coding work: rapid feedback on the standard of their work after each and every edit (see section below)
  • Seamless integration into the wider ARCAD DevOps toolchain: rapid and complete cross-referencing and auditing, source code management, dependency building, automated testing (with deep-dive diagnostics of errors), and deployment and release automation for IBM i LPAR environment management.

Rapid value from ARCAD CodeChecker for your code base

Productivity gains through Source Code Analysis Quality Gate enforcement

Typically, if a developer knows within a few minutes of writing code that a guideline has been breached – via a desktop code-checking product like ARCAD CodeChecker – they can fix the issue immediately with minimal disruption. If code is peer reviewed instead, a developer can wait days or even weeks for feedback, by which time they have moved on to other tasks. The best analogy is a grammar checker in a word processor: if you know immediately, as you write, that you have made a grammar error, you can fix it while the sentence is still in your mind. If instead you run the grammar check two weeks after writing, most of the correction time is spent simply re-establishing the context of what you wrote.

Driving enforcement of Code Quality by offering real and immediate value to Developers

Many Source Code Analysis tools have a bad reputation. CIOs and Development management must constantly make risk management decisions between: slowing down the development process and the resultant business owner pressure to maintain the speed of delivery vs. introduction of technical debt which could cause a large business risk in the future if Code Quality Guidelines are not enforced.

CodeChecker from ARCAD has been designed to add immediate value right at the developer’s workspace/desktop through its integration with RDi and SEU. ARCAD designed CodeChecker in response to a comment heard regularly from RPG developers: “If you are going to mark my homework, at least tell me how you are going to judge my success or failure.”

In addition to the optimisation of the peer review process through automatic static analysis of code quality, ARCAD CodeChecker can offer value to developers and your in-flight projects:

  • Automatic creation and enforcement of source code quality guidelines reduces the need for peer review, allowing developers to spend more of their time coding.
  • Putting a Source Code Analysis process in place to eliminate technical debt means that an IBM i RPG team can keep pace with the other teams in the organisation in a DevOps world.

Combining Source Code Analysis with test automation and database integrity checking, and helping developers debug complex issues more rapidly

Modern Development Teams face this problem

DevOps Bottleneck effect from manual Source Code Analysis, Testing, and Debugging, re-test cycle

To cope with an acceleration of the DevOps cycles from a few releases per year to more regular releases i.e. monthly or even weekly releases, organizations are driven to perform more regular testing, normally delivered through automation.  They are also impelled to eliminate a lot of the effort which goes into the “localisation of defects” to their root cause, to ensure that developers can drive higher quality code without an impact on the timeframes required by application owners.

Shift left as a key driver for DevOps

The graph below shows a typical IBM i / RPG defect curve, i.e. the number of defects that occur over time, from the start of the project to the actual release date.

Cost of defects across the development lifecycle

Though typically 85% of defects are introduced in the early coding phases of the DevOps cycle, the cost to repair defects grows exponentially through the later phases of test and delivery, reaching inestimably high costs when a defect is found in production, with potentially a significant impact on business bottom line and reputation.

It is clear that by “shifting left” the detection of defects, their cost and impact are minimized.
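A back-of-envelope model makes the effect visible. The per-phase cost multipliers below are assumptions for illustration only, scaled so that a production fix costs roughly 100 times a coding-phase fix, as cited earlier:

```python
# Illustrative repair-cost multipliers per phase (assumed, not measured).
COST_MULTIPLIER = {"coding": 1, "integration": 5, "system-test": 15, "production": 100}

def total_repair_cost(defects_found, unit_cost=100):
    """Sum repair cost given how many defects are caught in each phase."""
    return sum(count * COST_MULTIPLIER[phase] * unit_cost
               for phase, count in defects_found.items())

# The same 20 defects, caught late vs. caught early ("shifted left"):
late = {"coding": 5, "integration": 5, "system-test": 5, "production": 5}
early = {"coding": 14, "integration": 4, "system-test": 2, "production": 0}
```

With these (hypothetical) numbers, catching the same defects earlier cuts the total repair bill by roughly a factor of ten, which is the whole argument for shifting left.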

ARCAD Software, through their work with many RPG development teams, have seen that developers perform a number of tasks to bring the detection of errors forward. These include:

  • Hand-coding unit tests to exercise individual program functionality and make sure they haven’t introduced defects as they develop.
  • Testing that individual batch processes still work after changes are made to specific programs.
  • Resetting and working with complex test data, including anonymisation requirements.
  • Cross-referencing defects across a multitude of components/programs to understand the impact of each code change on other RPG programs, and on non-IBM i cross-references too.
  • Scripting the deployment of newly compiled code onto the different LPARs (dev, QA, prod, etc.) and then manually checking that, once deployed, each LPAR is fully functioning. This process is typically referred to as “test environment assurance”.
  • Preparing the LPAR for a full end-to-end test execution, including load testing and end-to-end functional testing.
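The “test environment assurance” step in particular lends itself to automation. A minimal sketch, with an invented health probe and environment names:

```python
def check_environments(environments, probe):
    """Run a health probe against each environment; return the unhealthy ones."""
    return [name for name in environments if not probe(name)]

# Toy wiring: pretend only 'qa' failed its post-deployment check.
# In practice the probe would hit each LPAR with real smoke tests.
statuses = {"dev": True, "qa": False, "prod": True}
unhealthy = check_environments(statuses, probe=lambda name: statuses[name])
```

Run automatically after every deployment, a probe like this replaces the manual "is each LPAR fully functioning?" check with an immediate, repeatable report.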

Yet in ARCAD’s experience, each of these processes, when performed manually, causes additional cost, effort and risk of bottlenecks in a DevOps deployment process.

To cope with these challenges, ARCAD offer a number of tools in addition to CodeChecker (source code analysis) to eliminate bottlenecks in the DevOps process and provide a frictionless path from functional specification through to coding, unit testing, compilation, build, end-to-end functional test and production deployment:

  • ARCAD Verifier (batch and UI testing)
  • ARCAD DOT Anonymizer (test data anonymization)
  • ARCAD Observer (cross-referencing)
  • ARCAD Builder and DROPS (deployment automation and release management)

Each of these solutions can add additional value to your Development Process, shifting left to reduce overall cost in the development cycle:

Contribution of ARCAD solutions to a “shift left” of development costs

SpareBank Success Story

SpareBank reduced costs of environment management & compliance by 70%

Case Study

For example, SpareBank1 is one of ARCAD’s leading customers and managed to eliminate 70% of its test environment assurance effort through the use of the ARCAD suite.

Read the Story

ARCAD view and positioning

As a company, ARCAD began its evolution fixing the source code analysis problem of Year 2000 date format changes. Since then, ARCAD have provided solutions to the most burning and current challenges our 350+ customers face with their RPG code bases: cross-referencing, auditing, source code management, building, testing and deploying.

ARCAD for DevOps: suite of solutions integrated over a repository core


Suggested next step:

An Audit process using ARCAD expertise and tooling is an excellent starting point to your journey to a quality DevOps process.

To find out more about how ARCAD have designed their solutions to fix the next problem in Source Code Analysis, contact ARCAD and see how CodeChecker and the other ARCAD DevOps tools for IBM i can help with your code review, audit, testing and DevOps processes.

Nick Blamey

ARCAD’s Director of Northern European operations

Nick Blamey joined ARCAD from IBM, where he was responsible for DevOps and Rational solutions in various roles across Europe. Previously Nick worked for other software development tools organisations including HP/MicroFocus, Fortify Software (acquired by HP), Empirix (acquired by Oracle), Radview and Segue (now MicroFocus). Nick is a thought leader in the areas of Static Code Analysis, Testing Automation, DevOps and Shift-Left strategies.


ARCAD Software launches “Pay-per-Use” system for DROPS, their flagship Application Release Orchestration (ARO) solution

Annecy, France and Peterborough, NH, USA – 12 November 2018 – ARCAD Software, market leader in Enterprise DevOps and Modernization solutions, today announced the launch of a new Pay-per-Use pricing system for DROPS, their flagship Application Release Orchestration solution.



Anonymize your test data to prevent a data breach

In our previous webinar, we covered how Test Automation is an integral component of the DevOps and agile methodologies. Yet for testing to be effective, you need realistic test data available. A central issue is that this data often comes from production.

This puts development shops particularly at risk of a data breach.

How to eliminate risk and maintain test quality?  Integrate data masking into the heart of your DevOps cycle.

Our Webinar will demonstrate how easy it is to implement high performance data anonymization across any DBMS.

Watch the replay

Getting Progressive About Regression Testing

If you want to employ modern software development and testing techniques, you have to move on from simple unit testing by developers and implement regression testing in your quality assurance (QA) organization. This is perhaps the best way to take the risk out of continuous development – something that companies have to embrace if they are to remain competitive.

The difference between regression testing and normal testing is that, in the most common model, a developer receives a request to fix a problem or add a feature, makes the changes, and does unit testing: they come up with test cases that exercise the fix or the feature before passing it to QA, which essentially runs the same tests. Developers make a change and know what result they are supposed to get back. If you add 2 plus 2, you know you are supposed to get 4; if 2 plus 2 equals 4, the unit test is successful. Regression testing, by contrast, tests all of the functionality, which is much broader. Unit testing is not looking for broader impacts.
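The 2-plus-2 example above can be written out directly. The unit test covers only the change itself, while the regression suite replays a broader set of recorded scenarios to catch unintended side effects:

```python
def add(a, b):
    """The function under change (a stand-in for real application logic)."""
    return a + b

def unit_test_change():
    """Narrow check written by the developer for the specific change."""
    return add(2, 2) == 4

# Broader set of recorded (input, expected-output) scenarios.
REGRESSION_SUITE = [
    ((2, 2), 4),
    ((0, 0), 0),
    ((-1, 1), 0),
    ((10, 5), 15),
]

def run_regression():
    """Broad check: every recorded scenario must still pass."""
    return all(add(a, b) == expected for (a, b), expected in REGRESSION_SUITE)
```

A change can pass `unit_test_change` yet break one of the other scenarios; only `run_regression` would catch that, which is exactly the broader coverage the article describes.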

Read the whole Article

The rise of Enterprise DevOps: solving the IT silo challenge

By Olenka Van Schendel | October 23rd 2018


In 2018 the enterprise IT silo problem still persists.  The disconnect between digital initiatives and legacy development continues to drain IT budgets and allows inconsistent deliveries to reach production.  Errors detected at this point have a direct business impact: the average cost per hour of a major incident in a strategic software application in production is $1M – tenfold the average hourly cost of a hardware malfunction (*).  And it is estimated that 70% of errors in production are due simply to deployment errors, and only 30% to faulty code.  Yet CIOs responsible for today’s diverse IT cultures lack visibility and control over the software release process.

What solutions are emerging?  Since the last Gartner Symposium, we are seeing Release Management technologies and DevOps converge.  Enterprise DevOps is coming of age.

As a mainstream movement, the DevOps community is assuming the operational responsibility that comes with success. The agility of “Dev” now tackles the constraints and corporate policies familiar to “Ops”.

From CI/CD to Enterprise DevOps

IT environments today comprise a complex mixture of applications, each one made up of potentially hundreds of microservices, containers and multiple development technologies – including legacy platforms that have proven so reliable and valuable to the business that even in 2018 they still form the core of many of the world’s largest business applications.

Many CI/CD pipelines have done a fair job in provisioning, environment configuration, and automating the deployment of applications. But they have so far failed to give the business answers to enterprise-level challenges around compliance with new regulations, corporate governance and evolving security needs.
What are called DevOps pipelines today are often custom-scripted and fragile chains of disparate tools. Designed primarily for cloud-native environments, they have successfully automated a repeatable process for getting applications running, tested and delivered.
But most are lacking the technology layer needed to manage legacy platforms like IBM i (aka iSeries, AS/400) and mainframe z/OS, leaving a “weak link” in the delivery process.  This siloed approach to DevOps tooling carries the business risk of production downtime and uncontrolled cost.

Solutions are emerging. Listen to SpareBank1’s experience for a recent example. The next phase in release management is already with us. Enterprise DevOps offers a single, common software delivery pipeline across all IT development cultures and end-to-end transparency on release status.  This blog explains how we got here.

What has been holding DevOps back? Bimodal IT holds the key.

The last few years have seen the emergence of “Bimodal IT“, an IT management practice recognizing two types – and speeds – of software development, and prescribing separate but coordinated processes for each.
Gartner Research defines Bimodal IT as “the practice of managing two separate but coherent styles of work: one focused on predictability; the other on exploration”.
In practice, this calls for two parallel tracks, one supporting rapid application development for digital innovation projects, alongside another, slower track for ongoing application maintenance on core software assets.

Bimodal IT

According to Gartner, IT work styles fall into two modes. Bimodal Mode 1 is optimized for areas that are more predictable and well-understood. It focuses on exploiting what is known, while renovating the legacy environment into a state that is fit for a digital world. Mode 2 is exploratory, experimenting to solve new problems and optimized for areas of uncertainty. These initiatives often begin with a hypothesis that is tested and adapted during a process involving short iterations, potentially adopting a minimum viable product (MVP) approach. Both modes are essential in an enterprise to create substantial value and drive significant organizational change, and neither is static. Combining a more predictable evolution of products and technologies (Mode 1) with the new and innovative (Mode 2) is the essence of an enterprise bimodal capability. Both play an essential role in the digital transformation.
Legacy systems like IBM i and z/OS often fall into the Mode 1 category. New developments on Windows, Unix and Linux typically fall into Mode 2.

The limits of CI/CD

Seamless software delivery is a primary business goal. The IT industry has made leaps and bounds in this direction with the widespread adoption of automated Continuous Integration/Continuous Delivery (CI/CD). But let us be clear about what CI/CD is and what it is not.
Continuous Integration (CI) is a set of development practices driving teams to implement small changes and check code into shared repositories frequently. CI starts at the end of the code phase and requires developers to integrate code into the repository several times a day. Each check-in is then verified by an automated build and test, allowing teams to detect and correct problems early.
Continuous Delivery (CD) picks up where CI ends and spans the provision-test-environment, deploy-to-test, acceptance-test and deploy-to-production phases of the SDLC.
Continuous Deployment extends continuous delivery: every change that passes the automated tests is deployed to production automatically. In DevOps doctrine, continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.
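In miniature, the chain just described reduces to: build each check-in, test it automatically, and deploy only when the tests pass. Everything in this sketch is a placeholder for real pipeline steps:

```python
def make_pipeline(build, test, deploy):
    """Wire build/test/deploy steps into a handler invoked on each check-in."""
    def on_checkin(change):
        artifact = build(change)
        if not test(artifact):
            return "rejected"      # problem caught early; nothing ships
        deploy(artifact)
        return "deployed"
    return on_checkin

# Toy wiring: a change ships to "production" only if its tests pass.
deployed = []
pipeline = make_pipeline(
    build=lambda change: change,
    test=lambda artifact: artifact.get("tests_pass", False),
    deploy=deployed.append,
)
```

The design point is that deployment is a consequence of the automated tests, not a separate manual decision, which is what distinguishes continuous deployment from continuous delivery.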
The issue is that most CI/CD pipelines are limited in their use to the cloud-native, so-called new-technology side of the enterprise. Enterprises today are awaiting the next evolution: a common, shared pipeline across all technology cultures. To achieve this, many organizations need to progress from simple automation to business release coordination, or orchestration.

DevOps Facts & Predictions

Infographic

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

Discover the Infographic

From Application Release Automation (ARA) to Orchestration (ARO)

Application release automation (ARA) involves packaging and deploying an application/update/release from development, across various environments, and ultimately to production. ARA tools combine the capabilities of deployment automation, environment management and modeling.
By 2020 Gartner predicts that over 50% of global enterprises will have implemented at least one application release automation solution, up from less than 15% in 2017. Approximately seven years old, the ARA solution market reached an estimated $228.2 million in 2016, up 31.4% from $173.6 million in 2015. The market is continuing to grow at an estimated 20% compound annual growth rate (CAGR) through 2020.
The ARA market is evolving fast in response to growing enterprise requirements to both scale DevOps initiatives and improve release management agility across multiple cultures, processes and generations of technology. We are seeing ARA morph into a new discipline, Application Release Orchestration (ARO).
One layer above ARA, Application Release Orchestration (ARO) tools arrange and coordinate automated tasks into a consolidated release management workflow. They advance best practices by moving application-related artifacts, applications, configurations and even data together across the application life cycle. ARO spans cross-pipeline software delivery and provides visibility across the entire software release process.
ARO forms the cornerstone of Enterprise DevOps.

Enterprise DevOps: Scaling Release Quality and Velocity

Enterprise DevOps is still new, and competing definitions are appearing. Think of it as DevOps at Scale.
As with Bimodal IT, large enterprises use DevOps teams to build and deploy software through individual, parallel pipelines. Each pipeline flows iteratively from development through integration and deployment, and uses toolchains to automate or orchestrate the phases and sub-phases of the Enterprise DevOps SDLC.
At a high level the phases in the Enterprise DevOps SDLC can be summarized as plan, analyze, design, code, commit, unit-test, integration-test, functional-test, deploy-to-test, acceptance-test, deploy-to-production, operate, user-feedback.
The phases and tasks of the ED-SDLC can differ within each pipeline, or there can be a different level of emphasis on each phase or sub-phase. For example, in bimodal mode 1 on a system of record (SOR), the plan, analyze and design phases may carry greater weight than in bimodal mode 2. In bimodal mode 2 on a system of engagement (SOE), the frequency of the commit, unit-test, integration-test and functional-test cycles may be emphasized.
Risk of deployment error is high in enterprise environments because toolchains in each pipeline differ, and dependencies exist between artifacts in distinct pipelines. Orchestration is required to coordinate the processes across the pipelines. Orchestration equates to a more sophisticated automation, with some built in intelligence and an ultimate goal to be autonomic.
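The cross-pipeline coordination described above can be sketched as a dependency problem. In this illustrative example (the pipeline names and dependencies are invented), an orchestrator releases each pipeline only after everything its artifacts depend on has shipped, instead of letting each pipeline deploy independently:

```python
# Hedged sketch of release orchestration across pipelines: compute a safe
# release order from inter-pipeline artifact dependencies.
from graphlib import TopologicalSorter

# Invented example: the "web-ui" pipeline depends on artifacts from the
# "rpg-services" pipeline, which in turn depends on a shared "database" change.
deps = {
    "web-ui": {"rpg-services"},
    "rpg-services": {"database"},
    "database": set(),
}

# A topological order guarantees no pipeline deploys before its dependencies.
release_order = list(TopologicalSorter(deps).static_order())
print(release_order)  # ['database', 'rpg-services', 'web-ui']
```

Real ARO tools add far more (environment modeling, approvals, rollback), but dependency-aware sequencing like this is the core step up from per-pipeline automation.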

How to transition Legacy systems to DevOps?

In response to the challenges of Bimodal IT, we have reached a point where classic DevOps and Release Management disciplines converge.
For over 25 years ARCAD Software has been helping large enterprises and SMEs improve software development through advanced tools and innovative new techniques. During this time, we have developed deep expertise in legacy IBM i and z/OS systems. Today we are recognized by Gartner Research as a significant player in the Enterprise DevOps and ARO space for both legacy and modern platforms.
Many ARO vendors assume greenfield developments on Windows, Unix and Linux, so legacy systems become an afterthought. ARCAD is different: we understand the need to get the most from your company's investment in legacy systems over the past decades, and also the demands and challenges of unlocking the value within these legacy applications. ARCAD ensures you can offer your application owners and stakeholders a practical, inclusive, step-by-step solution to deliver both DevOps and ARO for new and legacy applications alike, versus an expensive and risky rip-and-replace project.

Leveraging existing CI/CD pipelines

A huge number of tools is available to organizations delivering DevOps today. Tools overlap, and the danger is “toolchain sprawl”. Yet no one tool can address all needs in a modern development environment. It is essential, therefore, that all selected tools integrate easily with each other.
The ARCAD for DevOps solution has an open design and integrates easily with standard tools such as Git, Jenkins, JIRA, ServiceNow. It is capable of orchestrating the delivery of all enterprise application assets, from the most recent cloud-native technologies to the core legacy code that underpins your business.

ARCAD has a proven methodology to ensure we leverage the value in your Legacy applications and avoid a rip-and-replace approach.  ARCAD solutions extend and scale your existing DevOps pipeline into a frictionless workflow that supports ALL the platforms in your business.

Modernizing your IT assets

If the future of legacy application assets is your concern, then complementary ARCAD solutions can automate the modernization of your legacy databases and code – increasing their flexibility in modern IT architectures, making it easier to hire younger development talent, and ensuring new hires can collaborate efficiently with experienced legacy team members.

With 25 years of Release Management experience working with the largest and most respected Legacy and Digital IT teams across the globe, ARCAD Software has built security, compliance and risk minimization into all of its offerings. This is exactly where DevOps is headed.

(*) Source: IDC

White Paper Enterprise DevOps

Enterprise DevOps White Paper

This paper attempts to debunk competing DevOps concepts, terminologies and myths in order to help make the path forward clearer and more practical.

Download the White Paper

SpareBank1 Case Study

Success Story SpareBank1 ARCAD for DevOps

SpareBank1 drives rapid development cycles on the IBM i, reducing costs of environment management & compliance by 70%

Read the story

Olenka Van Schendel


VP Strategic Marketing & Business Development

With 28 years of IT experience in both distributed systems and IBM i, Olenka started out in the Artificial Intelligence domain and natural language processing, working as software engineer developing principally on UNIX. She soon specialized in the development of integrated software tooling including compilers, debuggers and source code management systems. As VP Business Development in the ARCAD Software group, she continues her focus on Application Lifecycle Management (ALM) and DevOps tooling with a multi-platform perspective including IBM i.

2018-11-28T12:52:33+00:00 Blog|

The evolution of DevOps

By Marc Dallas | October 9th 2018

DevOps practices have evolved in recent years in many organizations seeking to respond more effectively to their business challenges.
While DevOps previously focused primarily on IT services, it now extends across the entire enterprise, impacting processes and data flows and driving deep organizational changes.

DevOps, above all a management of change

Organizations that have embraced DevOps, whether fully or only partially, can already testify that this approach carries a significant ROI.
Many others have explored and come close to DevOps but have not yet taken the final step.
The main reason for this hesitation is that a DevOps transition goes beyond the adoption of new tooling, into people and process; most importantly, it requires careful management of change.
Indeed, DevOps is not just about choosing the right automation solution. It requires a supported transition, wherein lies the role and responsibility of the solution vendor. In a DevOps project, levels of maturity and understanding differ between organizations. A DevOps solution provider therefore has a duty to advise and support in the management of change, and should add value to the project beyond simple automation. Company specifics must be taken into account, in particular the scope and diversity of development cultures and technology platforms in the application portfolio. Without this, a DevOps project has no chance of success.

The emergence of DevSecOps and BizOps

The emergence of these new terms is directly related to the “complicated” relationship between Development and Operations.
Over a decade ago, development teams had already adopted mainstream agile methods and were releasing smaller software increments faster and more frequently, while operations – upholding their corporate constraints around application availability and compliance – became an apparent bottleneck in the process. To keep software development cycles fluid and deliver updates to the end-user at the speed of the business, operations had to follow this same agile trend.

The DevOps movement held the key. By enhancing communication in a way that recognizes and respects the constraints of each department, we have transitioned into a dialogue, an exchange, and a set of processes that meet the needs of each profession and integrate their respective constraints in order to collaborate effectively. This is the essence of what is meant by DevOps.
The appearance of these new and related terms DevSecOps and BizOps is simply evidence of the extension of this level of communication to all departments in a company, a progression in business change.

DevSecOps, for example, aims to enhance security by integrating it early in the application development process. We could add other departments into the chain.
Above all, this means that today companies are realizing that there is a need to have a wider software supply chain which, at each link in the chain, integrates the same principles exemplified by DevOps.

BizOps is a more generic term. It describes an extended chain between business and operations. Taken to its conclusion, there is a contraction we could call “BizDevSecOps”.
BizOps involves strategic and operational management. Indeed we should extend the term further than Ops today, as far as users (BizUsers).
We are reminded of terms such as BtoB or BtoC, except that with DevSecOps and BizOps we embark on a change in internal organization, necessary for the company to thrive. We retain a level of granularity in tasks to allow focus on solving problems in a particular area. It is about defining and executing all the required actions and automating them in a continuous delivery environment.
This is the idea behind Release Coordination, right the way from the business strategy to the provision of new releases to the end-user.


The challenges of Enterprise DevOps

The concept of Enterprise DevOps elevates DevOps into a business strategy, a process that adds value to the organization, not just IT.
Issues such as release identification and validation between different departments, the causes of bottlenecks, decision times, and implementation or delivery durations can, when examined at DevOps scale, become an area for experimentation. We can then extend this inter-department cooperation across the entire company, which will de facto increase the overall Return on Investment.
And this is the challenge of Enterprise DevOps: that the entire company becomes aware of the added value brought by this change of collaboration between services.
All this microscopically managed work between Dev and Ops will then be implemented on a macroscopic scale across the entire enterprise chain (from the strategic decision to the end user).

The question of DevOps for Database

Although it is not new, the consideration of data in DevOps is gaining momentum.
In order to save time and reduce development effort, the concept of parameterizing data (whatever the data type, structure and underlying data management technology) was introduced, to modify program behavior depending on the specific data entered.
Parameter data therefore has an impact on the behavior of program execution. As such, these data actually belong to the field of development and operation of the application.

Generally, because the data volume remains low, very basic processes are used to transfer parameter data to production.
These elementary processes therefore do not usually cater for the rollback of data, or the identification of the version number of the installed system – capabilities that are considered low priority as the volume of data is relatively small.
Yet the critical nature of parameter data makes these processes in reality very important.
By underestimating their importance, we introduce a weak link in the quality chain, and run the risk of an incident in production that can cause huge financial losses, but also a loss of confidence in the deployment process.
It is therefore vital to not focus solely on the frequency and scale of deployment, but also on the criticality of the data that is being deployed.
Parameter or configuration/settings data must follow the same quality chain as the applications themselves, as is the promise of “DevOps for Database”.
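What "the same quality chain" means for parameter data can be sketched minimally. The store below is invented for illustration: each deployment records a version number, and a rollback path returns to the previous stable state, exactly the capabilities the elementary transfer processes above lack:

```python
# Illustrative sketch (invented structures): deploy parameter data with the
# same safeguards as application code - a recorded version number and a
# rollback path - rather than an ad-hoc file transfer to production.

class ParameterStore:
    def __init__(self):
        self.history = []          # stack of (version, data snapshot)

    def deploy(self, version, data):
        self.history.append((version, dict(data)))

    def current(self):
        return self.history[-1] if self.history else None

    def rollback(self):
        """Return to the previous stable version after a failed release."""
        if len(self.history) > 1:
            self.history.pop()
        return self.current()

store = ParameterStore()
store.deploy("v1.0", {"vat_rate": 20.0})
store.deploy("v1.1", {"vat_rate": 21.0})   # faulty release of parameter data
version, data = store.rollback()           # back to the known-good version
print(version, data["vat_rate"])  # v1.0 20.0
```

Even this toy version answers the two questions the text raises: which version of the parameter data is installed, and how do we get back to the previous one.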

Conclusion

  • DevOps is not just about process automation, it involves a true management of change
  • The terms DevSecOps and BizOps reveal that companies now recognize the need for an enterprise-wide software supply chain
  • The added value of inter-department collaboration is realized across the wider enterprise
  • Often critical data must follow the same quality chain as the applications

White Paper « DROPS for DevOps »

DROPS for DevOps White Paper

This White Paper describes the opportunity, the challenges and the solutions offered by DROPS as you roll out a DevOps strategy in your multi-platform environments.

Download the White Paper

Systeme U Case Study

Système U Success story

Systeme U cuts application deployment costs by 40% using DROPS on IBM i & Linux

Read the story

Marc Dallas


R&D Director

With a Software Engineering degree from the Integral International Institute, Marc started his career in 1994 as Analyst Programmer at Nestlé Cereal Partners, and was appointed Product Manager at ADSM Software, prior to joining ARCAD Software in 1997.

2018-11-28T13:03:37+00:00 Blog|

Continuous Testing (CT) in your DevOps Strategy


As DevOps drives faster and more frequent software delivery, the greater the pressure on testing staff. Each update needs to be regression-tested to avoid the risk of downtime. At this rate of change, manual testing becomes a bottleneck, and is often the first task to be sidelined.

If you are testing manually, watch our Webinar to learn how to automate the process of Continuous Test, to catch errors as early as possible in the cycle:

  • Increase your team’s productivity
  • Shorten time to delivery
  • Increase application reliability in production
  • Reduce IT costs

We will demonstrate how easy it is to record test scenarios from your 5250, client/server and web interfaces. Learn how to automatically replay all scenarios impacted by a software change, and quickly identify errors via graphical reports.

Whether you have 2 Testers or 40 Business Analysts performing regression testing, watch it now!

Watch the Replay


2018-10-26T15:07:58+00:00 On-demand Webinars|

5 most common questions about data anonymization

by Maurice Marrel | September 13th 2018

GDPR and other data privacy and protection regulations have raised more questions around the handling of data than ever before. We asked our DPO and anonymization expert, Maurice Marrel, to answer some of the most common questions facing our customers today.

1. What is the role of anonymization in GDPR compliance?

In recent years, “digital everywhere” has dramatically transformed the flow of data.
Production data is copied into test, QA or pre-production environments, and exposed to the eyes of testers, recipients or unauthorized developers on machines far less protected than production environments.
Many files are also shared with external partners, who often only require a small part of the data actually transferred.

This personal data must be protected from leaks and other indiscretions.
In response, specific new legislation has emerged, such as the GDPR in Europe.

These new regulations oblige the desensitization of confidential data.
Desensitization means transforming the data, using non-reversible algorithms.
However, the data must remain usable. A test user must still see on the screen, in the last name field, a modified last name that “looks like” a last name.
Similarly, the domain must remain the same: an IBAN / RIB or a social security number must stay valid and compatible with the requirements and validation checks made by applications to allow the tests to actually run.
These same constraints must still apply even in the case of data redundancy in legacy databases, or across multiple database management systems.
These concerns must all be taken into account by any anonymization solution.
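A minimal sketch of these constraints (illustrative only; real anonymizers use richer dictionaries and format-preserving rules per data type): the transformation is non-reversible, the output still "looks like" the original field, and the same input always maps to the same substitute, so redundant copies across databases stay consistent.

```python
# Hedged sketch of deterministic, non-reversible name anonymization.
import hashlib

# Tiny illustrative substitution list; a real tool would use a large one.
FIRST_NAMES = ["Alice", "Bruno", "Carla", "David", "Emma", "Farid"]

def anonymize_name(name, salt="project-secret"):
    """Deterministic substitution: hash the value, pick a realistic name.
    Many inputs map to each output, so the original cannot be recovered."""
    digest = hashlib.sha256((salt + name).encode()).digest()
    return FIRST_NAMES[digest[0] % len(FIRST_NAMES)]

# Same input gives the same substitute, so redundant copies stay linked...
assert anonymize_name("Maurice") == anonymize_name("Maurice")
# ...and the result still passes a "looks like a first name" check.
print(anonymize_name("Maurice") in FIRST_NAMES)  # True
```

Fields with validation rules (an IBAN, a social security number) need the extra step of regenerating a checksum-valid value, which this name example does not show.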

2. Anonymization and pseudonymization – how do they differ?

Anonymization ensures that the original data can never be retrieved by any means, unlike pseudonymization.

In a test environment, even if the machines are secure, it is the developers, testers, QA staff, and training personnel who have direct access to the data. It is therefore imperative to anonymize or pseudonymize the data upstream.
In the case of a pseudonymization, the data can optionally be kept encrypted in software metadata, so it can be retrieved individually on request, and only to authorized persons. The old data in this case are preserved. This can be useful for example to check specific, one-off problems in a test environment.

Pseudonymization is often the only solution that allows normal operation of applications and the completeness of test scenarios.
On the other hand, it is a potentially reversible technique due to the identification keys that may not be replaceable for technical reasons. Pseudonymization can leave identifiable data in place, such as customer numbers, which are sometimes the only link between data storage technologies (DBMS, files). Combining the data with each other can help malicious organizations statistically guess some of the original data.
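The contrast can be made concrete with a sketch (invented API, not a real product): pseudonymization keeps a protected mapping so an authorized person can recover an original value on request, whereas anonymization keeps no such mapping at all.

```python
# Hedged illustration of pseudonymization: reversible by design, but only
# through a vault that must itself be kept under strict access control.
import secrets

class Pseudonymizer:
    def __init__(self):
        self._vault = {}           # pseudonym -> original, access-controlled

    def pseudonymize(self, value):
        token = "CUST-" + secrets.token_hex(4)   # realistic-looking stand-in
        self._vault[token] = value
        return token

    def reidentify(self, token):
        """Restricted to authorized persons, e.g. to reproduce a one-off bug."""
        return self._vault[token]

p = Pseudonymizer()
token = p.pseudonymize("Jean Dupont")
assert p.reidentify(token) == "Jean Dupont"   # reversible, on request
```

Delete the vault and the scheme degrades into anonymization: the tokens remain usable for testing, but nothing can be recovered from them.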

3. Personal vs. sensitive data – what does this change for data handling?

According to the CNIL, personal data is “any information relating to a natural person who can be identified, directly or indirectly”. Whereas sensitive data refers to “any information that reveals racial or ethnic origins, political, philosophical or religious opinions, trade union membership, health or sexual orientation of a natural person”.

But this differentiation of data can be confusing.
The most important point is to identify the data to be anonymized. The goal is to prevent anyone being able to find links between these data. For example, there may be no need to modify health-status data if the corresponding first and last names are anonymized.

Anonymization therefore utilizes algorithms that apply to all types of data.

4. How can I safeguard IT performance when introducing anonymization?

It is important not to consider performance alone, but also to take security into account.
Anonymization means an additional process, and will therefore necessarily have an impact on performance. However, if it is well planned for, and its scope and requirements are well defined, any impact will be minimized. And on average, only about twenty percent of data needs to be anonymized.

In general, data when anonymized, will be retrieved directly from a production environment for insertion into a test environment. But even if users (developers, testers etc.) do not have access during processing, test environments are usually less protected.
The ideal solution, in this case, will be to make a copy of the production database. This will allow the first instance to remain available while the other is being anonymized.
The anonymized data will then be dispatched to the relevant test, QA and training environments.
Another solution is to isolate a copy of the production environments in test machines while limiting access during the anonymization, then distribute onto the test environment.

5. How can I identify which data should be anonymized?

Typically, anonymization is required for test environments.
A good knowledge of the overall scope of the database is important, because it will help in assessing which types of data will need to be anonymized.
It is also important to consider how specific data relate to each other, as some data are inseparable.
To assist the administrator, the discovery of the data eligible for anonymization must be as automated as possible, using algorithms catering for the various types of data.

But in some cases, anonymization is needed for production environments. This is especially the case with the “right to be forgotten“, which has been considerably reinforced by the GDPR.
Indeed, anyone residing in the European Union and whose organization holds personal data may take control over his/her data.
But in many cases, simply deleting this data would have a significant impact on other data. In such cases anonymization is therefore a better solution as it renders personal data inaccessible, while preserving the usability of data to allow normal application operation and consistency of results.
Take the example of an online commerce site. When a product is sold, out-of-stock, money-in and parcel-delivery data are necessary for the business to operate and cannot be removed. However, the buyer's name, address or banking data can be.
The right to be forgotten, whether it results from a specific request or a regulation on the conservation of historical data, is the most common reason for anonymizing a production environment.

Conclusions

  • Anonymization meets the requirements of the GDPR because it transforms data irreversibly, while retaining its usability
  • Anonymization concerns all data, personal or sensitive
  • If the anonymization scope and requirements are well defined and planned ahead, any impact on performance will be minimized
  • Anonymization may be necessary in a production environment in response to “right to be forgotten” requirements

White Paper « Protection of personal data »

Protection of personal Data - White Paper thumbnail

This document details the fundamentals of the GDPR, and recommendations as to how to become compliant before the 2018 deadline.

Download the White Paper

DOT Anonymizer Datasheet

DOT Anonymizer Datasheet

The anonymization of personal data is an ever-sensitive subject. This document will show you how DOT Anonymizer helps you protect the confidentiality of your test data.

Download the Datasheet

Maurice Marrel


Senior Solutions Consultant, DOT Software

Maurice Marrel has over 20 years' experience on IBM i (and its predecessors), remaining actively involved in modernization projects at the forefront of technology on the platform. Now specializing in technical pre-sales and training for ARCAD's solutions for Enterprise Modernization on IBM i, Maurice has a wide-ranging technical background including IT management in the aerospace and energy industries, and project leadership in several technology sectors including software development tooling.

2018-11-29T18:42:03+00:00 Blog|

ARCAD releases new CodeChecker module to guarantee software quality and reduce DevOps risk


Peterborough, NH and Annecy, France – 16 September 2018 – ARCAD Software, leading vendor in DevOps and Enterprise Modernization solutions for IBM i, today announced the release of a new module in its DevOps suite: ARCAD-CodeChecker, for continuous source code quality analysis.


2018-10-02T11:34:27+00:00 Press Articles|

3 steps to zero-risk Modernization on IBM i


Starting a modernization project on IBM i can be a daunting prospect, faced with the many options out there: Webservices? N-tier? Web, mobile? Java, .NET?

Join us for our three-part Modernization Webinar Series on September 18th, 25th and 27th with Barbara Morris, Scott Forstie and Tim Rowe, and learn how to get from A-Z with minimum risk!

Featuring actual case studies, our series is structured around a 3-step approach to risk-free modernization:

  • Step 1: Analyze – Where do I start to modernize? What are my choices?
  • Step 2: Structure – Laying a secure foundation with a structured DevOps process
  • Step 3: Transform – Automating the conversion of RPG source code, database and UI

Watch the Replay

Our special Guests

1st Part – Tim Rowe takes a tour of the very latest technology options on IBM i, with the goal of “Making IBM i normal!”. Tim guides us in making the right choice of development language, database, method and tooling using the “best tool for the job”, taking performance and data integration into account. Assess the use of open source tools like Git and Jenkins in an enterprise DevOps setting. Learn the latest in connectors including MQ, JDBC, ODBC, REST and SQL Services… A round trip of the “art of the possible” on IBM i!

2nd Part – Barbara Morris proves that Free Form RPG is a game-changer, making RPG universally easy to code and maintain. Learn which “old-fashioned” RPG coding patterns to avoid. Code modularity means breaking up code into smaller pieces for easier re-use. But how to make existing monolithic RPG code modular? Start with a prior analysis of the code, and a gradual implementation of changes – from simple improvement of variable names through to complex changes, such as pulling out a section of code into a procedure. Safeguard your work with continual testing, already in place before making large-scale changes to the code.

3rd Part – Scott Forstie takes the subject of modernization down to the database, discussing the options for automated conversion to SQL and the rights and wrongs of a surrogate approach.

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

About ARCAD Software

Created in 1992, the ARCAD Software group is a leading international vendor in integrated DevOps and Enterprise Modernization solutions, with subsidiaries in Europe, USA and Asia, supporting 1000+ installations across 35 countries. ARCAD solutions are distributed by IBM worldwide. ARCAD's DevOps technology is positioned in the 2017 Gartner Magic Quadrant for Application Release Automation (ARA).

North America – ARCAD Software Inc.
1 Phoenix Mill Lane, Suite 203
Peterborough NH 03458
Toll free: 800.676.4709
Phone: 603.371.9074
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
Asia – ARCAD Software Asia
c/o Pramex International Limited
1 Austin Road West International Commerce Centre
7107B 71/F Tsim Sha Tsui HONG KONG, Yau Ma Tei
Hong Kong
sales-asia@arcadsoftware.com

DevOps Facts & Predictions – Infographic

by Olivier Bourgeois | September 6, 2018

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

DevOps Facts & Predictions Infographic
DevOps for IBM i White Paper thumbnail

Improve your DevOps skills!

White Paper

This White Paper describes the opportunity, the challenges and the solutions offered by DROPS as you rollout a DevOps strategy in your multi-platform environments.

Download the White Paper

2018-11-28T15:25:32+00:00 Blog|

Secure the missing link in your Application Release process


Deployment is by far the most critical phase in software delivery. Any incident can have costly consequences on the availability of applications and even your company’s reputation.

Many organizations already automate application deployment, but still run a major risk:  Reliability in Production.

Whatever the technologies you employ – in our webinar, you’ll learn how to:

  • Secure the deployment process
  • Minimize the risk of errors in production
  • Keep operational control over application availability
  • Safeguard against costly downtime

Protect your business’s bottom line. Watch our Webinar!

Demonstration

Whether you run your business on Windows, UNIX, Linux, IBM i (aka iSeries, AS/400) or mainframe z/OS platforms, application reliability in production is a critical and constant concern.

In our Webinar, you will learn how to rapidly:

  • Return to a previous stable application state in the case of error,
  • Rollback your database upgrades,
  • Check the integrity of your deliveries before triggering a deployment,
  • Integrate your entire application portfolio, including software packages,
  • Manage all architectures (Legacy, Web, Mobile, Cloud) with one single tool,
  • Comply with regulations re. separation of roles and responsibilities,
  • Coordinate deployment with other daily operations tasks.

With concrete examples we’ll show how you can complete your DevOps strategy using existing enterprise tools (GitHub, Jira, Jenkins, Ansible, Docker, etc.).

 

Watch the replay

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of consulting experience in software development, Floyd is DevOps Advocate for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps migration and Enterprise Modernization projects, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of CI/CD, Test Automation and Application Lifecycle Management.

Ray Bernardi

Senior Consultant, ARCAD Software

Ray Bernardi is a 30-year IT veteran and currently Senior Consultant for ARCAD Software, international ISV and IBM Business Partner. Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) covering a broad range of functional areas including enterprise modernization, CI/CD and DevOps. In addition, Ray is a frequent speaker at technical conferences around the world and has authored articles in several publications on the subjects of application analysis and modernization, DevOps, and business intelligence.

About ARCAD Software

Created in 1992, the ARCAD Software group is a leading international vendor in integrated DevOps and Enterprise Modernization solutions, with subsidiaries in Europe, USA and Asia, supporting 1000+ installations across 35 countries. ARCAD solutions are distributed by IBM worldwide. ARCAD's DevOps technology is positioned in the 2017 Gartner Magic Quadrant for Application Release Automation (ARA).

North America – ARCAD Software Inc.
1 Phoenix Mill Lane, Suite 203
Peterborough NH 03458
Toll free: 800.676.4709
Phone: 603.371.9074
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
Asia – ARCAD Software Asia
Room 22, Smart-Space 3F – Units 908-915, Level 9, Cyberport 3
100 Cyberport Road – Hong Kong
Phone: +852 3618 6118
sales-asia@arcadsoftware.com

2018-10-26T11:59:37+00:00 On-demand Webinars|

Orchestrate a CI/CT/CD pipeline for IBM i using Git, Jenkins and JIRA

Looking to orchestrate a continuous delivery pipeline for all your IBM i code – RPG, CL, DDS or COBOL – using the same tools as on your open systems?

…Automate the integration, test, and delivery of your RPG changes?
…Share a common source code repository between your IBM i and open-systems developers?
…Ensure that continuous test is an integral part of your CI/CD workflow?

In our Webinar, we’ll demonstrate how you can achieve all this with an integrated CI/CT/CD pipeline on IBM i using your standard enterprise tools Git, Jenkins and JIRA:

  • continuous integration (CI) and dependency build of RPG, CL, DDS, …
  • continuous “regression” test (CT)
  • continuous deploy (CD) & rollback on error

Simplify your DevOps toolchain across IBM i and open systems.  Watch the Webinar!
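The three stages listed above can be sketched as a script such as a Jenkins job might run. This is a minimal illustration only: `build_rpg`, `run_regression_tests`, `deploy` and `rollback` are hypothetical placeholders for your real IBM i build, test and deploy tooling.

```shell
#!/bin/sh
# Minimal sketch of a CI/CT/CD sequence. The four functions below are
# hypothetical stand-ins for real IBM i build/test/deploy commands.
set -e

build_rpg()            { echo "CI: dependency build of changed RPG/CL/DDS"; }
run_regression_tests() { echo "CT: replaying recorded regression scenarios"; }
deploy()               { echo "CD: promoting the build to the target system"; }
rollback()             { echo "CD: restoring the previous object versions"; }

build_rpg               # continuous integration
run_regression_tests    # continuous "regression" test
deploy || rollback      # continuous deploy, with rollback on error
```

In a real pipeline, each of these would typically be a separate Jenkins stage, triggered by a push to the shared Git repository and reported back to JIRA.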

Watch the replay

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

Ray Bernardi

Senior Consultant, ARCAD Software

Ray Bernardi is a 30-year IT veteran and currently a Pre/Post-Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner.  Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas including enterprise IBM i modernization and DevOps.  In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several publications on the subjects of application analysis and modernization, SQL, and business intelligence.


2018-09-07T16:53:05+00:00 On-demand Webinars|

2018 will undoubtedly mark the advent of the digital era

Dear customer / Dear partner,

2018 will undoubtedly mark the advent of the digital era. All companies have understood that they have to adapt to this new world or risk outright disappearance. The good news is that they have the means and the motivation to do so and many have already targeted their investments in this direction.

Entering the digital era is first and foremost a realization that future users, customers or partners will be those young generations who are entering the job market, infused with digital in their daily lives, and who are revolutionizing all established codes.

The digital age means thousands of new mobile applications that need to interact with the core systems. It’s also thousands of webservices developed and web interfaces with an enriched user experience.

This new mix of technologies and the necessary adaptations in the information system make DevOps an essential strategy for all IT organizations, large and small.

While many companies are already mature in their “DevOps journey”, it is often applied only to their new technologies. This is far from the case with their so-called “legacy” systems. The new challenge is to extend the DevOps approach across the entire information system. Here again, this means adapting the IT organization to the younger generations. Without that shift, who will maintain these critical applications at the very core of the company’s business?

We live in an exciting era of profound change and opportunity. The strength of ARCAD’s company and technology is its ability to be credible to populations that differ widely in culture, age and experience. We were the first to integrate into our offering the popular tools of the open world, open source or not. This approach makes it possible to generalize a DevOps strategy, whatever the technologies and languages used. It lends credibility to the use of these tools in the legacy world, while normalizing legacy platforms within the wider information system. The transition will probably be long, but at least the strategic direction is clear. 2018 will be, we are convinced, the advent of the “DevOps for legacy” era.

You will find in this newsletter many examples that illustrate my point.

Yours sincerely,

Philippe Magne

CEO and Chairman

2018-02-14T16:48:58+00:00 Miscellaneous|

Employee spotlight – An interview with our Indian developers

Read our interview with the developers who recently joined ARCAD (more…)

2018-11-28T15:24:23+00:00 Blog|

What is Source Code Management?

by Ray Bernardi | February 8, 2018

Source Code Management process
I’m old. Let’s get that established right up front. I have been around longer than sand, or at least that’s how it feels sometimes. You would think someone who has been in the industry for as long as I have would have a simple answer to a simple question. What is source code management? My reply? Well, that depends.

(more…)

2018-11-28T18:16:14+00:00 Blog|

Convert from CA 2E Synon to RPG Free Form, with near 100% accuracy

Applications developed in CA 2E Synon are high-value assets running critical business processes today.  Their custom model and rules represent a competitive advantage over and above any standard software package.

Yet many CA 2E Synon applications are inflexible and decades old, and Synon skills are in diminishing supply.  Modernizing a Synon application means breaking out of the CASE tool and converting to a modern language.  Where conversions to Java and .NET have failed in the past, a conversion to native Free Form RPG preserves the IBM i architecture and leverages the benefits of the platform.

Learn from a customer case study, how to:

  • Convert near 100% of your CA 2E Synon application to modern Free Form RPG, automatically
  • Generate instantly readable code, accessible to a new generation of developers
  • Reduce the volume of code by a factor of 7 (on average)
  • Generate ILE procedures from macro-instructions for easy maintenance

Free yourself from your 4GL constraints – and benefit from the latest technology on IBM i!

Watch the replay

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

Ray Bernardi

Senior Consultant, ARCAD Software

Ray Bernardi is a 30-year IT veteran and currently a Pre/Post-Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner.  Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas including enterprise IBM i modernization and DevOps.  In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several publications on the subjects of application analysis and modernization, SQL, and business intelligence.


2018-09-24T16:52:31+00:00 On-demand Webinars|

Modernization process, application modernization roadmap

by Ray Bernardi | February 2, 2018

Automating IBM i Modernization in 3 steps
It’s 2018 – have you modernized yet? If not, you must be waiting for some kind of invitation. Consider this a formal invitation. You need to get with the times.

If you haven’t noticed, over the past few years there have been some significant changes in the IBM i landscape, changes for the better. I used to hear people talking about how the days of the IBM i were numbered, that it was antiquated and that the people working on it were all dinosaurs. That’s simply not true.

(more…)

2018-11-28T15:26:44+00:00 Blog|

Transforming The Art Of Code And The Face Of IBM i

ARCAD has been in business for 25 years, and we have done a lot of technical innovation over those years. We started our business with traditional software change management as it was practiced at that time: a set of tools to manage developer work and to transfer changes from development to test to production. At that time, of course, it was only for OS/400 and then IBM i production platforms. Some customers have many production machines, but the typical case involves two: one for development and test, and the other for production.

(more…)

2018-01-25T09:52:41+00:00 Press Articles|

Continuous Integration/Deployment (CI/CD) for IBM i, using Git, Jenkins and JIRA

How can traditional IBM i development languages, like RPG, CL, DDS or even COBOL, be integrated into the continuous workflows used by open systems teams, with Jenkins, Git and JIRA?

Learn how to bring these typically disparate teams into sync, using the same tools and delivering changes together.

We’ll take you through each phase of the DevOps cycle, showing how to accomplish this integration and what the challenges are, using open source tools across both IBM i and open systems. In particular, we’ll demonstrate how you can automate the most complex tasks like:

  • dependency build (CI) of specific IBM i technologies, e.g. ILE and DB2 SQL
  • code quality check (CQ)
  • regression test (CT)
  • continuous deploy (CD) & rollback on error
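
As an illustration of the dependency-build idea in the first bullet: when a source member changes, every object that depends on it must be identified and rebuilt. A toy sketch of that lookup, where the dependency pairs and member names are invented for the example (a real tool would draw them from a cross-reference repository):

```shell
#!/bin/sh
# Toy dependency build: list the objects to rebuild after a change.
# The "object:dependency" pairs and member names are hypothetical.
set -e

deps="ORDERS_PGM:CUSTFILE
INVOICE_PGM:CUSTFILE
REPORT_PGM:INVOICE_PGM"

changed="CUSTFILE"

# print every object whose direct dependency matches the changed member
echo "$deps" | awk -F: -v c="$changed" '$2 == c { print "rebuild " $1 }'
```

A real dependency build would also follow transitive dependencies (here, REPORT_PGM depends on INVOICE_PGM, which is itself rebuilt) and order the rebuilds accordingly.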

 

Watch our Webinar to improve your open source tool skills on IBM i!

Watch the replay

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

Ray Bernardi

Senior Consultant, ARCAD Software

Ray Bernardi is a 30-year IT veteran and currently a Pre/Post-Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner.  Ray has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas including enterprise IBM i modernization and DevOps.  In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several publications on the subjects of application analysis and modernization, SQL, and business intelligence.


2018-09-04T09:47:31+00:00 On-demand Webinars|

GitHub: How Developers Got to Run the World (sic)

Of the world’s most visited websites this month, GitHub ranks 59th and climbing. Not bad, considering that GitHub’s appeal is limited to software developers, whereas other popular sites like Google and Amazon have a planetary audience. Is software becoming as important as the written word?

Probably not, but with the power of open source and a distributed software hub, there are ways to shape and improve our world far more quickly than before. GitHub is home to 24M developers and is THE place developers go when they need something.

(more…)

2018-01-30T17:14:47+00:00 Press Articles|

ARCAD Procures Funding To Fuel Expansion

The financial incentives instigating investors to back companies that create software and provide services to the IBM i community continue to raise eyebrows and elicit surprise among those unfamiliar with the innovation taking place in the so-called legacy netherworld.

Another glimpse of this appeared last week when ARCAD Software announced it received a € 2.5 million ($2.95 million in US currency) investment from the European equity firm Alto Invest. ARCAD develops and markets DevOps and modernization solutions for platforms that include IBM i, Unix, Linux, Windows and z/OS. This is the first external funding ARCAD has received during its corporate history that includes more than 25 years in the IBM midrange market.
(more…)

2018-01-30T17:23:32+00:00 Press Articles|

Arcad Software, a world leader in Application Lifecycle Management solutions, announces a € 2.5M fundraising from Alto Invest

Arcad Software, a world leader in Application Lifecycle Management solutions, announces a € 2.5M fundraising from Alto Invest

 

Annecy and Paris, 27 November 2017

Created 25 years ago and present in Europe, USA and Asia, Arcad Software is among the top 3 global vendors specializing in Application Lifecycle Management (ALM) and Enterprise Modernization for IBM i (aka iSeries, AS/400) environments.
(more…)

2018-01-30T17:21:33+00:00 Press Articles|

How can Git be used as the source repository for your native IBM i code? – Part 1

Git started with GitHub and ARCAD on IBM i

by Ray Bernardi | November 14, 2017

I have been experimenting with Git as the repository for IBM i native source like RPG, CLP and so on. Git seems to be the source repository of choice at the moment, and using it for native code seems to open up a whole new world to IBM i developers. Imagine being able to branch and merge IBM i code as easily as you do your PHP code. Imagine adopting agile methodologies for native development, builds and deployments. Imagine never having to “check out” code and jump through those hoops. That’s what Git allows, and it’s really not that hard to accomplish. (more…)
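
Once the source sits in stream files, the branch-and-merge workflow described above needs nothing IBM i-specific on the Git side. A minimal sketch with plain Git commands; the file name, branch name and commit messages are illustrative:

```shell
#!/bin/sh
# Branch and merge RPG source with plain Git. Paths, branch names and
# messages are illustrative; the commands themselves are standard Git.
set -e

mkdir demo-repo && cd demo-repo
git init -q
git config user.email "dev@example.com"   # local identity for the demo
git config user.name  "Demo Developer"
git checkout -q -b main

printf '%s\n' "**FREE" "dsply ('hello');" > hello.rpgle
git add hello.rpgle
git commit -qm "initial RPG source"

git checkout -q -b feature/greeting       # branch, just as for PHP code
printf '%s\n' "// tweak the greeting" >> hello.rpgle
git commit -qam "tweak the greeting"

git checkout -q main
git merge -q feature/greeting             # merge the change back to main
```

The same commands work whether the working copy lives on a PC, in the IFS, or in a build workspace; the IBM i-specific part is compiling the merged source back into native objects.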

2018-11-28T18:15:43+00:00 Blog|