
5 keys to IBM i Modernization Success

By Alexandre Codinach | September 27th 2018

Successful IBM i application modernization projects are those that find the right balance between IT and business objectives.

These objectives can take the form of:

  • Improved system maintainability, flexibility, and scalability
  • Adoption of new tools and methods of development
  • Reduced risks and operational costs
  • Reduced time to market
  • Improved customer satisfaction and productivity
  • Easier hiring of skilled resources

Whatever the reason for a modernization project for a legacy system like IBM i, it is important to identify some key points for the success of the project:

1. Obtain backing from general management

Whatever its scope, a modernization project is a business project that goes beyond IT issues alone. The stakes relate to the company's performance, its development and sometimes its survival, even though the subject matter may seem obscure to the layman.

Tip:  “Popularize” the modernization project by conveying the business value associated with the technical gain.  Translate the technical argument into a business argument, and weigh any short-term impacts against the Return on Investment at the end of the project.  Secure management backing right at the start through an understanding of the business value gained from modernization and the risk of inaction.

2. Define an overall modernization roadmap

In such a project, not everything can, or must, be modernized.

We are dealing not with one modernization but with several modernizations. The approach must not be Manichean: techniques such as modernizing the existing system, reengineering and/or adopting software packages are not necessarily incompatible.

There is no “silver bullet” that takes you straight to modern. Complete renewal within 3 years is a fantasy. Modernization is a continuous, staged process that must interleave quick wins with longer-term goals.

Tip:  Plan regular communication points so that everyone in the organization visualizes and understands the issues. Including resources from the business side and defining clear business indicators will help this process.

3. Involve staff early, to include all impacted parties

Just like any IT project, even when it is outsourced, modernization consumes staff resources.

Over and above the technical side of the project, it is important to take into account an overall change management process within the organization, from IT right through to the business users, whose interaction with the application may change significantly enough to impact their daily work.

Tip:  Involve impacted staff right from the analysis phase of the project, to participate in the decision making process and be the first lever of communication with the teams.

4. Secure through automation

As work is underway, business must go on: modernizing must NOT mean putting projects on hold and ceasing to deliver new features needed by the business lines.

Automating your application lifecycle reduces risks and increases the productivity of IT staff by allowing them to focus on value-added work. In the end, this means it will be easier to allocate resources.

Continuous integration and deployment (CI / CD) will help you reduce development times and secure the reliability of applications in production.
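To make this concrete, here is a minimal sketch, in Python, of the kind of gated pipeline that CI/CD automation puts in place. It is not an ARCAD tool; the build, test and deploy commands are hypothetical placeholders standing in for whatever your build server or deployment tooling actually runs.

```python
import subprocess
import sys

# Hypothetical pipeline: each stage is a shell command that must succeed
# before the next one runs, so a broken build never reaches production.
PIPELINE = [
    ("build",        ["make", "build"]),          # compile / assemble artifacts
    ("unit-test",    ["make", "test"]),           # fast feedback on every commit
    ("deploy-test",  ["./deploy.sh", "test"]),    # push to a test environment
    ("regression",   ["./run_regression.sh"]),    # non-regression scenarios
    ("deploy-prod",  ["./deploy.sh", "prod"]),    # only reached if all gates pass
]

def run_pipeline() -> int:
    for name, command in PIPELINE:
        print(f"--- stage: {name}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; stopping the pipeline.")
            return result.returncode
    print("All stages passed; release delivered.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

The point is the gating: a stage that fails stops the flow, so an unreliable change never reaches production while business-as-usual deliveries continue.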

5. Test for non-regression

Often it is only internal teams and sometimes even only the business users that are able to provide useful scenarios for regression testing. Prepare these scenarios carefully before the project.

You must be able to verify that the modernization process, however wide-reaching, has not resulted in unexpected side effects that could degrade the operation of your application.

Run these tests again during the modernization project and check for errors.
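One common way to implement such non-regression checks is a “golden file” comparison: capture reference outputs from the existing application before modernization, then re-run the same business scenarios against the modernized version and compare. Below is a minimal sketch in Python, assuming each scenario produces a JSON result file; the directory names and format are illustrative only.

```python
import json
from pathlib import Path

# Hypothetical golden-file comparison: the expected results were captured
# from the application *before* modernization, the actual results are
# produced by the modernized version running the same scenario.
GOLDEN_DIR = Path("golden")     # reference outputs, recorded pre-modernization
ACTUAL_DIR = Path("actual")     # outputs from the modernized application

def compare_scenario(name: str) -> bool:
    expected = json.loads((GOLDEN_DIR / f"{name}.json").read_text())
    actual = json.loads((ACTUAL_DIR / f"{name}.json").read_text())
    if expected != actual:
        print(f"REGRESSION in scenario '{name}'")
        return False
    return True

if __name__ == "__main__":
    scenarios = [p.stem for p in GOLDEN_DIR.glob("*.json")]
    failures = [s for s in scenarios if not compare_scenario(s)]
    print(f"{len(scenarios) - len(failures)}/{len(scenarios)} scenarios unchanged")
    raise SystemExit(1 if failures else 0)
```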

Finally, if you do use external teams for all or part of your project, ensure that a non-regression guarantee is included.

It is vital to ensure that the system will continue to meet requirements.

Tip:  Benefit from the investment in testing needed for this project to bring long term improvements in your company’s testing process.

Conclusions

  • Communicate on, and build support for, your project
  • When defining the scope of your project, run a functional audit in addition to the technical audit
  • Anticipate the staffing needs to complete your project
  • Secure the project through automation, to ensure application availability for your end users
  • Check that the system continues to meet requirements using automated regression testing

Modernization as a Service

White Paper

This paper examines the problems associated with maintaining often mission critical IBM i (aka iSeries, AS/400) legacy applications on IBM Power systems.

Download the White Paper

Enterprise Modernization for IBM i

Brochure

“Through enterprise modernization, IBM i organizations can leverage their competitive advantage and R&D investment on a uniquely reliable platform strategically positioned for mobile and cloud technologies into the future.”

Read the Brochure

Alexandre Codinach

Alexandre Codinach

VP Sales and Operations Americas

Alexandre Codinach has 30 years of IBM i experience, both technical and managerial, with specialized expertise in the field of IBM i modernization.  With a 360 degree view of IBM i, Alexandre has excelled in many roles, including application architecture, project management, pre-sales and consulting.  As ARCAD COO, his in-depth knowledge of IBM i technology and ability to coordinate large, complex IBM i projects on an international scale have made him a trusted advisor in the rollout of ARCAD’s “Modernization as a Service” projects worldwide.



Anonymize your test data to prevent a data breach

In our previous webinar, we covered how Test Automation is an integral component of the DevOps and agile methodologies. Yet for testing to be effective, you need realistic test data available. A central issue is that this data often comes from production.

This puts development shops particularly at risk of a data breach.

How to eliminate risk and maintain test quality?  Integrate data masking into the heart of your DevOps cycle.

Our Webinar will demonstrate how easy it is to implement high performance data anonymization across any DBMS.

Watch the replay

The rise of Enterprise DevOps: solving the IT silo challenge

By Olenka Van Schendel | October 23rd 2018

Silos

In 2018 the enterprise IT silo problem still persists. The disconnect between Digital initiatives and Legacy development continues to drain IT budgets and allows inconsistent deliveries to reach production. Errors detected at this point have a direct business impact: the average cost per hour of a major incident in a strategic software application in production is $1M, tenfold the average hourly cost of a hardware malfunction (*). And it is estimated that 70% of errors in production are due simply to deployment errors, and only 30% to faulty code. Yet the CIOs responsible for today's diverse IT cultures lack visibility and control over the software release process.

What solutions are emerging?  Since the last Gartner Symposium, we are seeing Release Management technologies and DevOps converge.  Enterprise DevOps is coming of age.

As a mainstream movement, the DevOps community is assuming the operational responsibility that comes with success. The agility of “Dev” tackles the constraints and corporate policies familiar to “Ops”.

From CI/CD to Enterprise DevOps

IT environments today comprise a complex mixture of applications, each one made up of potentially hundreds of microservices, containers and multiple development technologies – including legacy platforms that have proven so reliable and valuable to the business that even in 2018 they still form the core of many of the world's largest business applications.

Many CI/CD pipelines have done a fair job in provisioning, environment configuration, and automating the deployment of applications. But they have so far failed to give the business answers to enterprise-level challenges around compliance with new regulations, corporate governance and evolving security needs.
What are called DevOps pipelines today are often custom-scripted and fragile chains of disparate tools. Designed primarily for cloud-native environments, they have successfully automated a repeatable process for getting applications running, tested and delivered.
But most are lacking the technology layer needed to manage legacy platforms like IBM i (aka iSeries, AS/400) and mainframe z/OS, leaving a “weak link” in the delivery process.  This siloed approach to DevOps tooling carries the business risk of production downtime and uncontrolled cost.

Solutions are emerging. Listen to SpareBank1's experience for a recent example. The next phase in release management is already with us. Enterprise DevOps offers a single, common software delivery pipeline across all IT development cultures and end-to-end transparency on release status. This blog explains how we got here.

What has been holding DevOps back? Bimodal IT holds the key.

The last few years have seen the emergence of “Bimodal IT”, an IT management practice recognizing two types – and speeds – of software development, and prescribing separate but coordinated processes for each.
Gartner Research defines Bimodal IT as “the practice of managing two separate but coherent styles of work: one focused on predictability; the other on exploration”.
In practice, this calls for two parallel tracks, one supporting rapid application development for digital innovation projects, alongside another, slower track for ongoing application maintenance on core software assets.

Bimodal IT

According to Gartner, IT work styles fall into two modes. Bimodal Mode 1 is optimized for areas that are more predictable and well-understood. It focuses on exploiting what is known, while renovating the legacy environment into a state that is fit for a digital world. Mode 2 is exploratory, experimenting to solve new problems and optimized for areas of uncertainty. These initiatives often begin with a hypothesis that is tested and adapted during a process involving short iterations, potentially adopting a minimum viable product (MVP) approach. Both modes are essential in an enterprise to create substantial value and drive significant organizational change, and neither is static. Combining a more predictable evolution of products and technologies (Mode 1) with the new and innovative (Mode 2) is the essence of an enterprise bimodal capability. Both play an essential role in the digital transformation.
Legacy systems like IBM i and z/OS often fall into the Mode 1 category. New developments on Windows, Unix and Linux typically fall into Mode 2.

The limits of CI/CD

Seamless software delivery is a primary business goal. The IT industry has made leaps and bounds in this direction with the widespread adoption of automated Continuous Integration/Continuous Delivery (CI/CD). But let us be clear about what CI/CD is and what it is not.
Continuous Integration (CI) is a set of development practices driving teams to implement small changes and check code into shared repositories frequently. CI starts at the end of the code phase and requires developers to integrate code into the repository several times a day. Each check-in is then verified by an automated build and test, allowing teams to detect and correct problems early.
Continuous Delivery (CD) picks up where CI ends and spans the provision-test-environment, deploy-to-test, acceptance-test and deploy-to-production phases of the SDLC.
Continuous Deployment extends continuous delivery: every change that passes the automated tests is deployed to production automatically. By the law of DevOps, continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.
The issue is that most CI/CD pipelines are limited in their use to the cloud-native, so-called new technology side of the enterprise. Enterprises today are awaiting the next evolution: a common, shared pipeline across all technology cultures. To achieve this, many organizations need to progress from simple automation to business release coordination, or orchestration.


DevOps Facts & Predictions

Infographics

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

Discover the Infographic

From Application Release Automation (ARA) to Orchestration (ARO)

Application release automation (ARA) involves packaging and deploying an application/update/release from development, across various environments, and ultimately to production. ARA tools combine the capabilities of deployment automation, environment management and modeling.
By 2020 Gartner predicts that over 50% of global enterprises will have implemented at least one application release automation solution, up from less than 15% in 2017. Approximately seven years old, the ARA solution market reached an estimated $228.2 million in 2016, up 31.4% from $173.6 million in 2015. The market is continuing to grow at an estimated 20% compound annual growth rate (CAGR) through 2020.
The ARA market is evolving fast in response to growing enterprise requirements to both scale DevOps initiatives and improve release management agility across multiple cultures, processes and generations of technology. We are seeing ARA morph into a new discipline, Application Release Orchestration (ARO).
One layer above ARA, Application Release Orchestration (ARO) tools arrange and coordinate automated tasks into a consolidated release management workflow. They advance best practices by moving application-related artifacts, applications, configurations and even data together across the application life cycle process. ARO spans cross-pipeline software delivery and provides visibility across the entire software release process.
ARO forms the cornerstone of Enterprise DevOps.

Enterprise DevOps: Scaling Release Quality and Velocity

Enterprise DevOps is still new, and competing definitions are appearing. Think of it as DevOps at Scale.
As with Bimodal IT, large enterprises use DevOps teams to build and deploy software through individual, parallel pipelines. Pipelines flow continuously and iteratively from development to integration and deployment. Each parallel pipeline uses toolchains to automate or orchestrate the phases and sub-phases of the Enterprise DevOps SDLC.
At a high level the phases in the Enterprise DevOps SDLC can be summarized as plan, analyze, design, code, commit, unit-test, integration-test, functional-test, deploy-to-test, acceptance-test, deploy-to-production, operate, user-feedback.
The phases and tasks of the ED-SDLC can differ within each pipeline, or there can be a different level of emphasis on each phase or sub-phase. For example, in Bimodal Mode 1 on a system of record (SOR), the plan, analyze and design phases may carry greater weight than in Mode 2. In Bimodal Mode 2 on a system of engagement (SOE), the frequency of the commit, unit-test, integration-test and functional-test phases may be emphasized.
Risk of deployment error is high in enterprise environments because toolchains in each pipeline differ, and dependencies exist between artifacts in distinct pipelines. Orchestration is required to coordinate the processes across the pipelines. Orchestration equates to a more sophisticated automation, with built-in intelligence and the ultimate goal of becoming autonomic.
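To illustrate the difference between per-pipeline automation and cross-pipeline orchestration, here is a minimal sketch in Python. It is not ARCAD's implementation; the pipeline names, stages and dependency rule are hypothetical, chosen only to show why a release must wait until every dependent artifact in every pipeline has passed its quality gates.

```python
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    """One delivery pipeline (e.g. cloud-native, or legacy IBM i)."""
    name: str
    stages: list                                 # ordered stage names
    passed: set = field(default_factory=set)     # stages that have succeeded

    def gate_ok(self) -> bool:
        # The pipeline is releasable only if every stage has passed.
        return all(stage in self.passed for stage in self.stages)

# Hypothetical pipelines with different toolchains and stage emphasis.
web_pipeline = Pipeline("cloud-native", ["build", "unit-test", "deploy-test", "acceptance-test"])
ibmi_pipeline = Pipeline("ibm-i-legacy", ["build", "unit-test", "regression-test", "deploy-test"])

# Cross-pipeline dependency: the web front end needs the new IBM i service.
dependencies = {"cloud-native": ["ibm-i-legacy"]}

def orchestrate_release(pipelines: dict) -> bool:
    """Release only when each pipeline and all pipelines it depends on are green."""
    for name, pipeline in pipelines.items():
        required = [name] + dependencies.get(name, [])
        if not all(pipelines[r].gate_ok() for r in required):
            print(f"Release blocked: '{name}' or one of its dependencies is not ready.")
            return False
    print("All pipelines green and dependencies satisfied: release can be deployed together.")
    return True

if __name__ == "__main__":
    pipelines = {p.name: p for p in (web_pipeline, ibmi_pipeline)}
    web_pipeline.passed.update(web_pipeline.stages)        # cloud side is green
    ibmi_pipeline.passed.update(["build", "unit-test"])    # legacy side still testing
    orchestrate_release(pipelines)                          # -> blocked
```

Per-pipeline automation would happily ship the cloud-native change on its own; the orchestration layer is what holds it back until the legacy artifact it depends on is equally ready.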

How to transition Legacy systems to DevOps?

In response to the challenges of Bimodal IT, we have reached a point where classic DevOps and Release Management disciplines converge.
For over 25 years ARCAD Software has been helping large enterprises and SMEs improve software development through advanced tools and innovative new techniques. During this time, we have developed deep expertise in legacy IBM i and z/OS systems. Today we are recognized by Gartner Research as a significant player in the Enterprise DevOps and ARO space for both legacy and modern platforms.
Many ARO vendors assume greenfield developments on Windows, Unix and Linux, and legacy systems hence become an afterthought. ARCAD is different: we understand the need to get the most from your company's investment in legacy systems over the past decades, as well as the demands and challenges of unlocking the value within these legacy applications. ARCAD ensures you can offer your application owners and stakeholders a practical, inclusive, step-by-step solution that delivers both DevOps and ARO for new and legacy applications alike, rather than an expensive and risky rip-and-replace project.

Leveraging existing CI/CD pipelines

There is a huge number of tools available to organizations to deliver DevOps today. Tools overlap and the danger is “toolchain sprawl”. Yet no one tool can address all needs in a modern development environment. It is therefore essential that all selected tools can easily integrate with each other.
The ARCAD for DevOps solution has an open design and integrates easily with standard tools such as Git, Jenkins, JIRA, ServiceNow. It is capable of orchestrating the delivery of all enterprise application assets, from the most recent cloud-native technologies to the core legacy code that underpins your business.

ARCAD has a proven methodology to ensure we leverage the value in your Legacy applications and avoid a rip-and-replace approach.  ARCAD solutions extend and scale your existing DevOps pipeline into a frictionless workflow that supports ALL the platforms in your business.

Modernizing your IT assets

If the future of legacy application assets is your concern, then complementary ARCAD solutions can automate the modernization of your legacy databases and code – increasing their flexibility in modern IT architectures, making it easier to hire younger development talent, and ensuring that new hires can collaborate efficiently with experienced legacy team members.

With 25 years of Release Management experience working with the largest and most respected Legacy and Digital IT teams across the globe, ARCAD Software has built security, compliance and risk minimization into all of its offerings. This is exactly where DevOps is headed.

(*) Source: IDC


Enterprise DevOps White Paper

This paper attempts to debunk competing DevOps concepts, terminologies and myths in order to help make the path forward clearer and more practical.

Download the White Paper

SpareBank1 Case Study


SpareBank1 drives rapid development cycles on the IBM i, reducing costs of environment management & compliance by 70%

Read the story

Olenka Van Schendel

Olenka Van Schendel

VP Strategic Marketing & Business Development

With 28 years of IT experience in both distributed systems and IBM i, Olenka started out in the Artificial Intelligence domain and natural language processing, working as a software engineer developing principally on UNIX. She soon specialized in the development of integrated software tooling including compilers, debuggers and source code management systems. As VP Business Development in the ARCAD Software group, she continues her focus on Application Lifecycle Management (ALM) and DevOps tooling with a multi-platform perspective including IBM i.


The evolution of DevOps

By Marc Dallas | October 9th 2018

DevOps practices have evolved in recent years in many organizations seeking to respond more effectively to their business challenges.
While DevOps previously focused primarily on IT services, it now extends across the entire enterprise, impacting processes and data flows and driving deep organizational changes.

DevOps, above all a management of change

Organizations that have embraced DevOps, whether fully or only partially, can already testify that this approach carries a significant ROI.
Many others have explored and come close to DevOps but have not yet taken the final step.
The main reason for this hesitation is that a DevOps transition goes beyond the adoption of new tooling into people and process; most importantly, it requires careful management of change.
Indeed, DevOps is not just about choosing the right automation solution. It requires a supported transition, and this is where the role and responsibility of the solution vendor lie. In a DevOps project, levels of maturity and understanding differ between organizations. A DevOps solution provider therefore has a duty to advise and support the management of change, and should add value to the project beyond simple automation. Company specifics must be taken into account, in particular the scope and diversity of development cultures and technology platforms contained in the application portfolio. Without this, a DevOps project has no chance of success.

The emergence of DevSecOps and BizOps

The emergence of these new terms is directly related to the “complicated” relationship between Development and Operations.
Over a decade ago, development teams had already adopted mainstream agile methods and were releasing smaller software increments faster and more frequently, while operations – upholding their corporate constraints around application availability and compliance – became an apparent bottleneck in the process. To keep software development cycles fluid and deliver updates to the end-user at the speed of the business, operations had to follow this same agile trend.

The DevOps movement held the key. By enhancing communication, in such a way as to recognize and respect the constraints of each department, we have transitioned into a dialogue, an exchange and a set of processes that meet the needs of each profession and integrate their respective constraints in order to collaborate effectively. This is the essence of what is meant by DevOps.
The appearance of these new and related terms DevSecOps and BizOps is simply evidence of the extension of this level of communication to all departments in a company, a progression in business change.

DevSecOps, for example, aims to enhance security by integrating it early in the application development process. We could add other departments into the chain.
Above all, this means that today companies are realizing that there is a need to have a wider software supply chain which, at each link in the chain, integrates the same principles exemplified by DevOps.

BizOps is a more generic term. It describes an extended chain between business and operations. There is a contraction that we could ultimately call “BizDevSecOps”.
BizOps involves strategic and operational management. Indeed, we should extend the term beyond Ops today, as far as the users (BizUsers).
We are reminded of terms such as BtoB or BtoC, except that with DevSecOps and BizOps we embark on a change in internal organization, necessary for the company to thrive. We retain a level of granularity in tasks that allows focus on solving problems in a particular area. It is about defining and executing all the required actions and automating them in a continuous delivery environment.
This is the idea behind Release Coordination, right the way from the business strategy to the provision of new releases to the end-user.


DevOps Facts & Predictions

Infographics

DevOps adoption is growing faster than ever. Check out our infographic to discover the latest DevOps predictions, and how this agile corporate culture improves efficiency in all lines of business!

Discover the Infographic

The challenges of Enterprise DevOps

The concept of Enterprise DevOps elevates DevOps into a business strategy, a process that adds value to the organization, not just IT.
Issues such as the identification and validation of releases between different departments, the causes of bottlenecks, decision times, and implementation or delivery durations can, when examined at DevOps scale, become an area for experimentation. We can then extend this inter-department cooperation to the entire company, which will de facto increase the overall Return on Investment.
And this is the challenge of Enterprise DevOps: that the entire company becomes aware of the added value brought by this change of collaboration between services.
All this microscopically managed work between Dev and Ops will then be implemented on a macroscopic scale across the entire enterprise chain (from the strategic decision to the end user).

The question of DevOps for Database

Although it is not new, the consideration of data in DevOps is gaining momentum.
To save time and reduce development effort, the concept of parameterizing data (whatever the data type, structure and underlying data management technology) was introduced, so that program behaviour can be modified depending on the specific data entered.
Parameter data therefore has an impact on the behavior of program execution. As such, these data actually belong to the field of development and operation of the application.

Generally, as the data volume remains low, typically very basic processes are used for the transfer of parameter data to production.
These elementary processes therefore do not usually cater for the rollback of data, or the identification of the version number of the installed system – capabilities that are considered low priority as the volume of data is relatively small.
Yet the critical nature of parameter data makes these processes in reality very important.
By underestimating their importance, we introduce a weak link in the quality chain, and run the risk of an incident in production that can cause huge financial losses, but also a loss of confidence in the deployment process.
It is therefore vital not to focus solely on the frequency and scale of deployment, but also to consider the criticality of the data being deployed.
Parameter or configuration/settings data must follow the same quality chain as the applications themselves, as is the promise of “DevOps for Database“.
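As an illustration of what following the same quality chain can mean for parameter data, here is a minimal sketch in Python using SQLite. It is not DROPS; the tables and deployment logic are hypothetical, shown only to make version identification and rollback of parameter data concrete.

```python
import json
import sqlite3

# Hypothetical example: parameter data is deployed as numbered versions,
# so the installed version is always identifiable and can be rolled back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE param_versions (version INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE params (key TEXT PRIMARY KEY, value TEXT)")

def deploy(version: int, params: dict) -> None:
    """Record the version, then apply the parameter data atomically."""
    with conn:
        conn.execute("INSERT INTO param_versions VALUES (?, ?)", (version, json.dumps(params)))
        for key, value in params.items():
            conn.execute("INSERT OR REPLACE INTO params VALUES (?, ?)", (key, value))

def rollback_to(version: int) -> None:
    """Re-apply the parameter data recorded for an earlier version."""
    row = conn.execute("SELECT payload FROM param_versions WHERE version = ?", (version,)).fetchone()
    with conn:
        conn.execute("DELETE FROM params")
        for key, value in json.loads(row[0]).items():
            conn.execute("INSERT INTO params VALUES (?, ?)", (key, value))

deploy(1, {"vat_rate": "20.0", "free_shipping_threshold": "50"})
deploy(2, {"vat_rate": "21.0", "free_shipping_threshold": "40"})
rollback_to(1)   # the faulty version 2 is backed out, version 1 is restored
print(dict(conn.execute("SELECT key, value FROM params")))
```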

Conclusion

  • DevOps is not just about process automation, it involves a true management of change
  • The terms DevSecOps and BizOps reveal that companies now recognize the need for an enterprise-wide software supply chain
  • The added value of inter-department collaboration is realized across the wider enterprise
  • Often critical data must follow the same quality chain as the applications


DROPS for DevOps White Paper

This White Paper describes the opportunity, the challenges and the solutions offered by DROPS as you roll out a DevOps strategy in your multi-platform environments.

Download the White Paper

Systeme U Case Study


Systeme U cuts application deployment costs by 40% using DROPS on IBM i & Linux

Read the story

Marc Dallas

Marc Dallas

R&D Director

With a Software Engineering degree from the Integral International Institute, Marc started his career in 1994 as an Analyst Programmer at Nestlé Cereal Partners, and was appointed Product Manager at ADSM Software prior to joining ARCAD Software in 1997.


5 most common questions about data anonymization

By Maurice Marrel | September 13th 2018

GDPR and other data privacy and data protection regulations have raised more questions around the handling of data than ever before. We asked our DPO and anonymization expert, Maurice Marrel, to answer some of the most common questions facing our customers today.

1. What is the role of anonymization in GDPR compliance?

In recent years, “digital everywhere” has dramatically transformed the flow of data.
Production data is copied into test, QA or pre-production environments, and exposed to the eyes of testers, acceptance staff or unauthorized developers on machines far less protected than production environments.
Many files are also shared with external partners, who often only require a small part of the data actually transferred.

This personal data must be protected from leaks and other indiscretions.
In response, specific new legislation has emerged, such as the GDPR in Europe.

These new regulations require the desensitization of confidential data.
Desensitization means transforming the data, using non-reversible algorithms.
However, the data must remain usable. A test user must still see on the screen, in the last name field, a modified last name that “looks like” a last name.
Similarly, the domain must remain the same: an IBAN / RIB or a social security number must stay valid and compatible with the requirements and validation checks made by applications to allow the tests to actually run.
These same constraints must still apply even in the case of data redundancy in legacy databases, or across multiple database management systems.
These concerns must all be taken into account by any anonymization solution.
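As a minimal illustration of these constraints (irreversible, yet realistic-looking and consistent across redundant copies of the data), here is a sketch in Python. It is not DOT Anonymizer; the keyed-hash substitution is just one possible technique, and the replacement name list is purely illustrative.

```python
import hashlib
import hmac

# A secret key makes the mapping non-guessable; discarding the key after the
# run makes the transformation effectively non-reversible.
SECRET_KEY = b"rotate-and-discard-me"

# Replacement values that "look like" real data, so applications and
# validation checks keep working in the test environment.
FAKE_LAST_NAMES = ["Martin", "Bernard", "Dubois", "Moreau", "Laurent", "Garcia"]

def pick(value: str, candidates: list) -> str:
    """Deterministically map a real value to a fake one of the same kind.

    The same input always yields the same output, so redundant copies of the
    data in different databases stay consistent with each other.
    """
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    return candidates[int.from_bytes(digest[:4], "big") % len(candidates)]

# The same customer appears in two systems and gets the same fake name in both.
print(pick("Dupont", FAKE_LAST_NAMES))   # a realistic-looking last name
print(pick("Dupont", FAKE_LAST_NAMES))   # same result, every time
```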

2. Anonymization and pseudonymization – how do they differ?

Anonymization ensures that the original data can never be retrieved by any means, unlike pseudonymization.

In a test environment, even if the machines are secure, it is the developers, testers, QA staff, and training personnel who have direct access to the data. It is therefore imperative to anonymize or pseudonymize the data upstream.
In the case of a pseudonymization, the data can optionally be kept encrypted in software metadata, so it can be retrieved individually on request, and only to authorized persons. The old data in this case are preserved. This can be useful for example to check specific, one-off problems in a test environment.

Pseudonymization is often the only solution that allows normal operation of applications and the completeness of test scenarios.
On the other hand, it is a potentially reversible technique, because some identification keys may not be replaceable for technical reasons. Pseudonymization can leave identifiable data in place, such as customer numbers, which are sometimes the only link between data storage technologies (DBMSs, files). Combining data sets with each other can help malicious organizations statistically guess some of the original data.
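The difference can be summarized in a few lines of Python (a hypothetical sketch, not a product feature): pseudonymization keeps a protected lookup table so that authorized persons can recover an original value on request, whereas anonymization keeps nothing that would allow the original to be recovered.

```python
import secrets

# Pseudonymization: the mapping is kept, but only in a protected store
# accessible to authorized persons; with it, a value can be recovered.
protected_vault = {}          # in real life: encrypted and access-controlled

def pseudonymize(value: str) -> str:
    token = "CUST-" + secrets.token_hex(4)   # realistic-looking surrogate key
    protected_vault[token] = value
    return token

def reidentify(token: str) -> str:
    return protected_vault[token]            # possible only with vault access

# Anonymization: nothing is kept, so there is no way back to the original.
def anonymize(value: str) -> str:
    return "CUST-" + secrets.token_hex(4)

t = pseudonymize("Jean Dupont")
print(t, "->", reidentify(t))        # recoverable, on request, by authorized staff
print(anonymize("Jean Dupont"))      # irreversible: the original is gone
```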

3. Personal vs. sensitive data – what does this change for data handling?

According to the CNIL, personal data is “any information relating to a natural person who can be identified, directly or indirectly”. Whereas sensitive data refers to “any information that reveals racial or ethnic origins, political, philosophical or religious opinions, trade union membership, health or sexual orientation of a natural person”.

But this differentiation of data can be confusing.
The most important point is to identify the data to be anonymized. The goal is to prevent anyone from being able to find links between these data. For example, health-status data that cannot itself be modified is no longer identifying once the corresponding first and last names have been anonymized.

Anonymization therefore utilizes algorithms that apply to all types of data.

4. How can I safeguard IT performance when introducing anonymization?

It is important not to consider performance alone, but also to take security into account.
Anonymization means an additional process, and will therefore necessarily have an impact on performance. However, if it is well planned for, and its scope and requirements are well defined, any impact will be minimized. And on average, only about twenty percent of data needs to be anonymized.

In general, the data to be anonymized will be retrieved directly from a production environment for insertion into a test environment. But even if users (developers, testers etc.) do not have access during processing, test environments are usually less protected.
The ideal solution, in this case, will be to make a copy of the production database. This will allow the first instance to remain available while the other is being anonymized.
The anonymized data will then be dispatched to the relevant test, QA and training environments.
Another solution is to isolate a copy of the production environments on test machines, limiting access during the anonymization, and then distribute the anonymized data to the test environments.

5. How can I identify which data should be anonymized?

Typically, anonymization is required for test environments.
A good knowledge of the overall scope of the database is important, because it will help in assessing which types of data will need to be anonymized.
It is also important to consider how specific data relate to each other, as some data are inseparable.
To assist the administrator, the discovery of the data eligible for anonymization must be as automated as possible, using algorithms catering for the various types of data.
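Below is a minimal sketch of what such automated discovery could look like, in Python; the detection patterns and the sampled columns are illustrative only, and a real solution would recognize many more data types and work from the database catalog rather than an in-memory sample.

```python
import re
from typing import Optional

# Illustrative detection patterns for a few common kinds of personal data.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "iban":  re.compile(r"^[A-Z]{2}\d{2}[A-Z0-9]{11,30}$"),
    "phone": re.compile(r"^\+?[\d .()-]{8,15}$"),
}

def detect_column_type(samples: list, threshold: float = 0.8) -> Optional[str]:
    """Flag a column for anonymization if most sampled values match a pattern."""
    for name, pattern in PATTERNS.items():
        matches = sum(1 for value in samples if pattern.match(value.strip()))
        if samples and matches / len(samples) >= threshold:
            return name
    return None

# Hypothetical samples taken from a customer table.
columns = {
    "CUSTNO":  ["000123", "000124", "000125"],
    "EMAIL":   ["a.martin@example.com", "j.dupont@example.org", "info@example.net"],
    "IBANACC": ["FR7630006000011234567890189", "DE89370400440532013000", "GB29NWBK60161331926819"],
}

for column, samples in columns.items():
    kind = detect_column_type(samples)
    print(f"{column}: {'anonymize as ' + kind if kind else 'no personal data detected'}")
```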

But in some cases, anonymization is needed for production environments. This is especially the case with the “right to be forgotten“, which has been considerably reinforced by the GDPR.
Indeed, anyone residing in the European Union and whose organization holds personal data may take control over his/her data.
But in many cases, simply deleting this data would have a significant impact on other data. In such cases anonymization is therefore a better solution as it renders personal data inaccessible, while preserving the usability of data to allow normal application operation and consistency of results.
Take the example of an online commerce site. When a product is sold, the stock, payment and parcel-delivery data are necessary for the business to operate and cannot be removed. However, the buyer's name, address or banking data can be.
The right to be forgotten, whether it results from a specific request or a regulation on the conservation of historical data, is the most common reason for anonymizing a production environment.

Conclusions

  • Anonymization meets the requirements of the GDPR because it transforms data irreversibly, while retaining its usability
  • Anonymization concerns all data, personal or sensitive
  • If the anonymization scope and requirements are well defined and planned ahead, any impact on performance will be minimized
  • Anonymization may be necessary in a production environment in response to “right to be forgotten” requirements

Protection of Personal Data White Paper


This document details the fundamentals of the GDPR, and recommendations as to how to become compliant before the 2018 deadline.

Download the White Paper

DOT Anonymizer Datasheet


The anonymization of personal data is an ever-sensitive subject. This document will show you how DOT Anonymizer helps you protect the confidentiality of your test data.

Download the Datasheet

Maurice Marrel

Maurice Marrel

Senior Solutions Consultant, DOT Software

Maurice Marrel has over 20 years of experience on IBM i (and its predecessors), remaining actively involved in modernization projects at the forefront of technology on the platform. Now specializing in technical pre-sales and training for ARCAD's solutions for Enterprise Modernization on IBM i, Maurice has a wide-ranging technical background including IT management in the aerospace and energy industries, and project leadership in several technology sectors including software development tooling.



3 steps to zero-risk Modernization on IBM i

Starting a modernization project on IBM i can be a daunting prospect, faced with the many options out there: Webservices? N-tier? Web, mobile? Java, .NET?

Join us for our three-part Modernization Webinar Series on September 18th, 25th and 27th with Barbara Morris, Scott Forstie and Tim Rowe, and learn how to get from A to Z with minimum risk!

Featuring actual case studies, our series is structured around a 3-step approach to risk-free modernization:

  • Step 1: Analyze – Where do I start to modernize? What are my choices?
  • Step 2: Structure – Laying a secure foundation with a structured DevOps process
  • Step 3: Transform – Automating the conversion of RPG source code, database and UI

Watch the Replay

Our special Guests

1st Part – Tim Rowe takes a tour of the very latest technology options on IBM i, with the goal of “Making IBM i normal!”. Tim guides us in making the right choice of development language, database, method and tooling using the “best tool for the job”, taking performance and data integration into account. Assess the use of open source tools like Git and Jenkins in an enterprise DevOps setting. Learn the latest in connectors including MQ, JDBC, ODBC, REST and SQL Services… A round trip of the “art of the possible” on IBM i!

2nd Part – Barbara Morris proves that Free Form RPG is a game-changer, making RPG universally easy to code and maintain. Learn which “old-fashioned” RPG coding patterns to avoid. Code modularity means breaking up code into smaller pieces for easier re-use. But how do you make existing monolithic RPG code modular? Start with a prior analysis of the code, and a gradual implementation of changes – from simple improvements to variable names, through to complex changes such as pulling a section of code out into a procedure. Safeguard your work with continual testing, in place before making large-scale changes to the code.

3rd Part – Scott Forstie takes the subject of modernization down to the database, discussing the options for automated conversion to SQL and the rights and wrongs of a surrogate approach.

Presenters

Floyd Del Muro

Business Development Manager, ARCAD Software

With 26 years of experience on the IBM midrange platform, Floyd is Business Development Manager for the ARCAD Software group.  In his role at ARCAD Software, Floyd has been extensively involved in the management of DevOps and Enterprise Modernization projects on IBM i, from planning stages through to delivery.  Drawing on his experience in managed services and the introduction of agile methods, Floyd is a trusted advisor and speaker on the subjects of DevOps, Test Automation and Application Lifecycle Management.

About ARCAD Software

Created in 1992, the ARCAD Software group is a leading international vendor of integrated DevOps and Enterprise Modernization solutions, with subsidiaries in Europe, the USA and Asia, supporting 1000+ installations across 35 countries. ARCAD solutions are distributed by IBM worldwide. ARCAD's DevOps technology is positioned in the 2017 Gartner Magic Quadrant for Application Release Automation (ARA).

North America – ARCAD Software Inc.
1 Phoenix Mill Lane, Suite 203
Peterborough NH 03458
Toll free: 800.676.4709
Phone: 603.371.9074
sales-us@arcadsoftware.com
Europe – ARCAD Software
55, rue Adrastée – Parc Altaïs- F-74650 Chavanod/Annecy
Phone: +33 450 578 396
sales-eu@arcadsoftware.com
Asia – ARCAD Software Asia
c/o Pramex International Limited
1 Austin Road West International Commerce Centre
7107B 71/F Tsim Sha Tsui HONG KONG, Yau Ma Tei
Hong Kong
sales-asia@arcadsoftware.com