IBM i DevOps TechTalk – Testing Tools #2

by the experts at ARCAD


The topic of this IBM i DevOps TechTalk podcast is “Testing Tools.” It features Jeff Tickner, Ray Bernardi and Alan Ashley as they discuss the importance of testing as we strive to ‘shift left,’ how to get started, methodologies, tools, and how testing can be automated and incorporated into your DevOps workflow. Listen in as the team shares key tips for success:

  • Making testing more efficient with tools
  • Take small steps
  • Shift left – start with unit testing
  • Code-free testing – use the JUnit framework
  • Create a process to manage test data
  • ARCAD tools to make this all easier

Listen to the Podcast

Ray Bernardi – Welcome to IBM i DevOps TechTalk, where we discuss key topics and questions with our ARCAD experts. Today’s topic is testing tools.
Hi everyone. My name is Ray Bernardi. I’m with ARCAD Software. I’ve been here for sixteen years, and I’ve been in the application lifecycle management space for over 30 years at this point. And I’ve got a couple of experts here with me today. I’ve got Jeff Tickner. Jeff, why don’t you say hello to everybody?

Jeff Tickner – Hi, I’m Jeff Tickner. I’ve been working in change management for over 25 years with a couple of different companies, and I do implementation, so I’ve been kind of the hands-on guy: I implement and train on the development processes we set up.

R.B. – Thanks, Jeff. Also joining us today is Alan Ashley, why don’t you say hello and tell us why you’re here?

Alan Ashley – I am here as kind of the DevOps representative, the one that handles the show-and-tell part of this, after a long career over at Big Blue.

R.B. – And you’re more into the testing area and things like that, are you not?

A.A. – Testing and anonymization and things like that, yes.

R.B. – All right. Today’s discussion will be on testing tools and efficiency. So why don’t we get started? All right. So I know this seems obvious, but why should I be concerned about testing? Jeff, why don’t you talk about that?

J.T. – My opinion is that everybody does testing, but it’s not necessarily formalized. And part of DevOps is to provide consistency. So you’re already testing; let’s make sure you’re testing consistently, right?

R.B. – So it sounds like what you’re saying here is that testing has to be a priority during the development cycle, not just an afterthought. I guess we call that test-centric development. And if I read into that just a little bit further, what you’re saying is that unless testing is automated, you’re not really doing DevOps. All right, so how do I get started to make testing more efficient? I mean, what would be the first thing that I should automate?

J.T. – One thing that I’ve asked people that are interested in getting started with automation is: do you use test cases, and do you track the test cases? Everybody’s doing testing at some level. So try to formalize the testing you’re doing now, track that testing, and see what you can reuse. There’s your first efficiency: reuse the assets, the artifacts that you are already generating from your testing.

R.B. – A way to reuse is just by capturing testers’ knowledge in some way, shape or form.

J.T. – Yes. So the developers are writing test cases right now. Are you tracking those test cases and reusing them, or making the developer write them each time?

A.A. – So Jeff, when you have this test case and you’ve now saved it, would you run it as you develop, or at the end when you get ready to put it out to QA, or would you run it twice?

J.T. – It depends on where you end up. But the first thing is to track your test cases, use keywords, and figure out when you should run them, in the context of the developer saying: I made this change, run this specific test case. There may be other test cases that apply that they are not thinking of.

R.B. – So that’s pretty far left. I mean, if you started working like that, you’d be working in unit testing areas and that’s probably a good place to start. I mean get as far left as possible, right?

J.T. – Yes, that’s right.

R.B. – Then those same tests that you use in unit testing, you can expand and even move toward regression testing. They’re probably reusable there.

J.T. – You can certainly start with unit tests and use those to identify the functional testing areas that you want. Also, a lot of my customers start with what they consider their most important business function and do regression testing on it through scripting, or just a rule that we should always run this test when we change this object. And that’s the start of automation:
I always want to run this test when I make this type of change.

R.B. – All right. So what about managing test data? How do I create test data? Alan, why don’t you talk about that?

A.A. – When you start looking at your test data, many people think: I’m just going to copy production. But you could have millions and millions of records to bring over, and you don’t want to test against that. You need to test against small subsets. And when you have a small subset, it allows you to test your program efficiently.
And when you start looking at some of the tests, like regression testing, your data has to stay consistent, has to stay the same. So you don’t necessarily want to refresh every day, because the answers from your regression tests would change from run to run.
J.T. – So if you have consistent test data that you can ideally reuse, and the test data doesn’t drift, then you can also reuse those test scripts. I was saying in the previous question: what can I reuse? Consistent test data allows me to reuse test scripts and get consistent results.

A.A. – And you know, where does this test data you talk about come from? Well, this fits right into your automation, into your pipeline, into your ETL, the extract, transform, load process that you hopefully already have in place. And that’s moving it from point A to point B, over to your dev environments.
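
To make that extract step concrete, here is a minimal sketch of pulling a small, consistent subset of production data into a test library over JDBC. It is an illustration only, not ARCAD’s tooling: the host, credentials, libraries, tables and key range are hypothetical, and it assumes the open-source JTOpen (jt400) JDBC driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SubsetExtract {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, credentials, libraries and tables.
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://myibmi", "testuser", "password");
             Statement stmt = con.createStatement()) {

            // A small, deterministic slice of customers: the same rows come
            // over on every refresh, so regression results stay comparable.
            stmt.executeUpdate(
                "INSERT INTO TESTLIB.CUSTOMER " +
                "SELECT * FROM PRODLIB.CUSTOMER WHERE CUSTOMER_ID BETWEEN 1 AND 1000");

            // Only the orders belonging to those customers, so the subset
            // stays referentially consistent.
            stmt.executeUpdate(
                "INSERT INTO TESTLIB.ORDERS " +
                "SELECT o.* FROM PRODLIB.ORDERS o " +
                "WHERE o.CUSTOMER_ID IN (SELECT CUSTOMER_ID FROM TESTLIB.CUSTOMER)");
        }
    }
}
```

The deterministic WHERE clause is the point: the same rows come over on every refresh, which is what keeps regression results comparable, as Alan and Jeff describe.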

R.B. – You guys are talking about taking data from production and making subsets of test data. Are there any concerns about security with production data here?

A.A. – This is where many people kind of fall by the wayside and don’t think about this. You know, here at ARCAD, we have DOT Anonymizer, which can anonymize your data. But the key is that it keeps it useful, reliable data across the board, and that removes those security concerns. It also helps satisfy things like the CCPA in California or GDPR out of Europe.

J.T. – And if you aren’t automating the management of your test data, you can have a static set that you’ve manually anonymized. But then we come back to the fact that it’s much harder to refresh, because I have to manually anonymize or sanitize that data again. So while we’re talking automation: if you want to refresh test data, you have to automate anonymization.
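
As an illustration of what “automate anonymization” can mean at its simplest, here is a sketch that replaces a sensitive column in the refreshed subset with deterministic pseudonyms. It is a generic example, not DOT Anonymizer’s API; the connection details, table and column names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class AnonymizeSubset {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://myibmi", "testuser", "password")) {

            // 1. Read the sensitive column from the refreshed subset.
            Map<Integer, String> pseudonyms = new HashMap<>();
            try (Statement select = con.createStatement();
                 ResultSet rs = select.executeQuery(
                         "SELECT CUSTOMER_ID, LAST_NAME FROM TESTLIB.CUSTOMER")) {
                while (rs.next()) {
                    pseudonyms.put(rs.getInt(1), pseudonym(rs.getString(2)));
                }
            }

            // 2. Replace real names with pseudonyms. Because the pseudonym is
            //    derived from the original value, the same input always maps to
            //    the same output, so results stay comparable between refreshes.
            try (PreparedStatement update = con.prepareStatement(
                    "UPDATE TESTLIB.CUSTOMER SET LAST_NAME = ? WHERE CUSTOMER_ID = ?")) {
                for (Map.Entry<Integer, String> e : pseudonyms.entrySet()) {
                    update.setString(1, e.getValue());
                    update.setInt(2, e.getKey());
                    update.addBatch();
                }
                update.executeBatch();
            }
        }
    }

    // Hash-based pseudonym; purely illustrative, not a full anonymization policy.
    static String pseudonym(String value) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(value.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder("NAME_");
        for (int i = 0; i < 4; i++) sb.append(String.format("%02X", hash[i]));
        return sb.toString();
    }
}
```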

R.B. – So you guys are talking a lot about automation, and you’re starting to mention tools like DOT Anonymizer. What other tools can I use? What about open source? Are tools available there?

J.T. – So for testing tools, we actually use JUnit, because JUnit is kind of the standard. It’s an open source tool. Our unit testing tools can plug into it, and there are other open source testing tools for the IBM i that plug into it. Look at what testing tools you’re using on other platforms; like I always say, go with the tool you’re already using, because you have somebody that understands that tool.
The challenge can be getting them to accept that the IBM i can actually be used with that tool. So always go with what you know.
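
For anyone who has not seen JUnit itself, here is a minimal JUnit 5 test, assuming the junit-jupiter dependency is on the classpath. The class under test is a small hypothetical business rule, inlined so the example stands alone; the point is the framework conventions (annotated test methods, assertions, a standard result format) that the pipeline tools discussed later already understand.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class InvoiceCalculatorTest {

    // Hypothetical unit under test, inlined so the example is self-contained.
    static class InvoiceCalculator {
        private final double taxRate;
        private double subtotal;
        InvoiceCalculator(double taxRate) { this.taxRate = taxRate; }
        void addLine(double amount) { subtotal += amount; }
        double total() { return subtotal * (1 + taxRate); }
    }

    @Test
    void addsLineItemsAndAppliesTax() {
        InvoiceCalculator calc = new InvoiceCalculator(0.08); // 8% tax, for example
        calc.addLine(100.00);
        calc.addLine(50.00);
        assertEquals(162.00, calc.total(), 0.001);
    }
}
```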

A.A. – So what’s one of the benefits? I mean, you talk about JUnit and open source. What makes JUnit so important when you start talking about the tools that can do this?

J.T. – It’s the framework. Essentially you want to automate, you want to provide a consistent testing experience, and the goal here, the Nirvana, is that I am done with a change, and that change is deployed and tested automatically. That implies a pipeline. These open source frameworks tie in to the open source pipeline tools like Jenkins.
So if I want to start out inexpensively, open source is the way to go. And if I look at what tools I’m already using on other platforms, like for my Java development, that kind of thing, I’m driven in the same direction towards them.

R.B. – How does an RPG guy get into JUnit?

A.A. – That’s funny, because so many of the kids today coming out of college know JUnit, but your older, more seasoned RPG programmers grew up on the i, going back to the System/38 and System/36, and JUnit doesn’t make sense to them in many cases. So at ARCAD we have iUnit, a program that in essence takes the JUnit framework.
It is basically JUnit for IBM i, in short, and it allows the RPG programmer to work in an environment that he’s familiar with. You work in iUnit, and you can actually export that out to JUnit so that it can fit into your automation pipeline. So that’s how you get from point A to point B.

J.T. – And that’s the challenge when you’re looking at open source tools: it implies that you’re going to have some effort to adapt the open source tool to the IBM i, because it wasn’t written with the IBM i in mind. Fortunately we do have tools for the IBM i, but it’s the tradeoff of what level of effort it is going to require to integrate these open source testing tools with the IBM i and with each other.
So, JUnit as the framework, and then writing the Java yourself to talk to, say, RPGUnit, versus a package that just does it for you.
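
Here is a rough sketch of what that “write the Java yourself” glue can look like: a JUnit test that drives logic on the IBM i over JDBC. It assumes the RPG logic has been exposed as an SQL stored procedure (GET_CREDIT_LIMIT, the library, the customer id and the expected value are all hypothetical) and that the JTOpen (jt400) driver is available; packaged tools such as iUnit are meant to take care of this plumbing for you.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

import org.junit.jupiter.api.Test;

class CreditLimitRpgTest {

    @Test
    void knownCustomerGetsExpectedCreditLimit() throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://myibmi", "testuser", "password");
             CallableStatement call = con.prepareCall(
                "CALL TESTLIB.GET_CREDIT_LIMIT(?, ?)")) {

            call.setInt(1, 4711);                        // customer id from the test data subset
            call.registerOutParameter(2, Types.DECIMAL); // credit limit returned by the RPG logic
            call.execute();

            // Works only because the test data stays consistent between refreshes.
            BigDecimal limit = call.getBigDecimal(2);
            assertEquals(0, new BigDecimal("5000.00").compareTo(limit),
                    "credit limit for customer 4711 in the test data subset");
        }
    }
}
```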

R.B. – So guys, I developed programs for years, and a lot of times I had to write programs to test programs. Instead of writing programs to test your programs, what kind of methodology is there to create tests, and how do you start creating tests around things that haven’t even been developed yet? Is that even possible?

A.A. – This is when you kind of get into that test-driven development aspect of things. I was a system admin a long time ago, and the worst thing you can do is have developers write tests for their own code, because they’ll end up coding around the test to get the proper answer. So you need a tool that works with no code.
How do you test with no code? We have the tool iUnit, which does that. It allows you to start creating tests without having to create the code around the test, and then you can start working on your modules, focus on that, and get away from having to do things manually. Do what you’re paid to do: develop code.

J.T. – Right. That also means that you can have other resources work on the testing. Ideally you have QA staff who, if they have a no-code tool, can provide more functionality rather than just walking through the test script. And that means we’re getting closer to automation, because we have an automated tool.

R.B. – So you guys are talking a lot about unit testing with iUnit. That’s what you’ve been talking about. But after unit testing, what’s next? Where do we go from there?

A.A. – There’s even a “before” where you could get into some testing. You have code quality testing and code security testing, maybe even before you get into some of the unit testing. And then at the other end of it, you have regression testing, where you’re actually checking the database for the changes that you made in the end.

A.A. – And you know, each one of those in and of itself is a long discussion to be had within your organization, within your departments, to say: do we need to add this to our testing? Do we need to start doing this? And by the way, you need to put that into your automation pipeline as well.

J.T. – Functional testing is really important, because you can often have unintended consequences. As Alan was saying, you could have somebody kind of work around the test and make sure that the unit test passes, but break the function and not realize it, because they’re focused on unit testing. I’ve talked with customers before that write scripted functional tests for their most important business functions and just run those as a matter of course whenever a change goes into QA.
That’s a great place to start, but it doesn’t give a wide range of coverage. We can use IBM’s code coverage tool; it has APIs now, so I can go and see what my code coverage is. If I want good code coverage, I want to go with a tool that doesn’t require me to write all of these functional tests myself.
And I also want to look at how I’m running those tests. How do I choose which tests to run? If I build a giant functional test suite that takes 2 hours to run through, I might have great coverage, but it’s not optimized. So I have to be able to track which tests to run when I put changes into QA and be efficient about it.
And now you’re really looking at a tool that can provide that information for you, or building your own database tracking test cases and objects. And we’re back to where I started.
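
The “build your own database” option Jeff mentions can start very small. Here is a toy sketch of the idea: a cross-reference from objects to test cases, used to pick the minimal set of tests for a change. The object and test names are invented, and in practice the mapping would live in a database table rather than in code.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TestSelector {

    // Which test cases exercise which objects (normally a database table, not a literal).
    static final Map<String, List<String>> TESTS_BY_OBJECT = Map.of(
            "ORDENT",  List.of("order-entry-smoke", "order-pricing-regression"),
            "INVCALC", List.of("order-pricing-regression", "invoice-totals-unit"),
            "CUSTMNT", List.of("customer-maintenance-smoke"));

    // Given the objects in a change, return the minimal set of tests to run in QA.
    static Set<String> testsToRun(List<String> changedObjects) {
        Set<String> selected = new LinkedHashSet<>();
        for (String object : changedObjects) {
            selected.addAll(TESTS_BY_OBJECT.getOrDefault(object, List.of()));
        }
        return selected;
    }

    public static void main(String[] args) {
        // A change that touches only INVCALC needs two of the four test cases.
        System.out.println(testsToRun(List.of("INVCALC")));
    }
}
```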

A.A. – That sounds like our cross-reference metadata repository that we use at ARCAD.

J.T. – Yes, exactly. We just leverage that to make our testing as efficient as possible. It’s certainly possible for somebody to do that on their own, as long as they start with that in mind. That’s why the first thing I said was: you’ve got to start tracking, or buy a tool that does the tracking for you.

R.B. – So if you’re talking about testing, then what does the ultimate flow kind of look like?

J.T. – Ideally, you have a pipeline and you drop your changes into that pipeline, either by pushing them to a repository or by kicking off the promotion process. And that pipeline is doing all of those things we’ve been talking about: ideally there’s automated code review, automated unit testing, automated regression or functional testing, and we have consistency. So there’s a lot of effort that goes into that.
In the end we get optimized speed, but the first thing we get is quality. And that’s something my customers sometimes get surprised by: they think it’s going to be magic, that boom, we’re going to have great code coverage right off the bat because it’s all automated and they wrote a check.
But the test cases have to be built no matter what. You want to prioritize those, and you start getting better quality as soon as you start covering your important business functions.

R.B. – So this is something that can be done in steps. You don’t have to go big right off the bat.

J.T. – Yes. Even if you buy a tool, you still have to prioritize your business functions and what you want to test first. And ideally you’re identifying the gaps in your testing coverage. Again, we integrate with IBM’s code coverage; it’s essentially free, and you can use it in your own testing efforts.

R.B. – So, talking about that flow and everything: you’re talking about test cases being built, anonymization of the data, code quality checks, pipelines. I mean, what does ARCAD have to make all of this any easier?

A.A. – So just to start, we’ve mentioned a couple of these so far. For your code quality, we have CodeChecker. For unit testing, we have iUnit, which, as we mentioned, ties in with and is built over JUnit. For functional or regression testing, you have Verifier, which builds your cross-reference as you go, so it knows which programs and which applications a change touches and it starts to tell you which tests you need to run.
And you know, earlier we talked about where you get the data that you’re testing over. Well, we have DOT Extract and DOT Anonymizer, so you can extract subsets of your data, and then once that data is staged, you can anonymize it for audit and security purposes. And now you have a small subset of functional, real data that’s protected.

R.B. – All right. So automation, I guess, is a key theme here. So can any of these tools that you’re mentioning work with a pipeline?

A.A. – You’ll hear throughout many of our discussions that it’s all about automation. For example, we mentioned iUnit and how it’s built on the JUnit framework. Well, it has a plug-in straight into Jenkins, which we use here in-house. But there are so many different options out there that you could use for your pipeline.

J.T. – We kind of standardized on Jenkins initially because it was the most popular pipeline tool, and we have Jenkins plugins for all our tools. But now that we’re actually implementing this out in the real world, there are lots of other pipeline tools out there. Even the big ALM vendors have some form of pipeline functionality. So you’re going to find yourself looking at other ways to connect your automation with a pipeline.
Again, look at the pipeline tool you’re already using in your organization, because you probably have somebody who understands how to script that pipeline. And if you’re facing a homegrown integration, or something other than Jenkins essentially, it helps to have somebody that understands YAML or the scripting. It could even be as simple as using SSH to connect to the IBM i, issue commands from the pipeline, and get some very simple feedback, it failed or it didn’t fail, back to the pipeline.
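
As a picture of that “simple SSH” option, here is a minimal sketch of a pipeline step that shells out to ssh, runs a command on the IBM i, and reports pass or fail through its exit code. The host, user and the RUNTESTS program are hypothetical, and key-based SSH authentication is assumed to be configured; a Jenkins plugin or another packaged integration would replace this.

```java
import java.io.IOException;

public class RunRemoteTests {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ssh", "testuser@myibmi",
                // Any remote command that returns a non-zero exit status on failure will do.
                "system \"CALL PGM(TESTLIB/RUNTESTS)\"");
        pb.inheritIO();                        // stream remote output into the pipeline log

        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            System.err.println("Remote tests failed (exit code " + exitCode + ")");
        }
        System.exit(exitCode);                 // the pipeline sees failure via the exit code
    }
}
```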

R.B. – All right, guys, we’re going to run out of time here pretty quickly. So Alan, Jeff, I’d like to thank you for being here today and answering some questions.

A.A. – Thanks. I always enjoy these talks.

J.T. – I am very passionate about the automation because I’m lazy and I want to save work.

R.B. – We could go on for hours about this, but we’ll end it here. Let me summarize. Today we talked about testing tools and making testing more efficient. We discussed how this doesn’t have to happen all at once: you can take small steps now to make testing more efficient. We talked about how unit testing is a great place to start.
That shifts you as far left as possible, and you’ll find defects much earlier. We talked about making testing code-free and using JUnit as a standard framework. We talked about having a good process to manage your test data, and how iUnit and the ARCAD tools can save you time and do a lot of this work for you.

R.B. – Thanks for listening. For more information, please visit our website at www.arcadsoftware.com.

Our Hosts


Alan Ashley

Solution Architect, ARCAD Software

Alan has been in support and promotion of the IBM i platform for over 30 years and holds the Presales Consultant for DevOps on IBM i role with ARCAD Software. Prior to joining ARCAD Software, he spent many years in multiple roles within IBM, from supporting customers with HA and DR, to application promotion, to migrations of the IBM i to the cloud. In those roles, he saw first-hand the pains many have with Application Lifecycle Management, modernization, and data protection. His passion in those areas fits right in with the ARCAD suite of products.


Ray Bernardi

Senior Consultant, ARCAD Software

Ray is a 30-year IT veteran and currently a Pre/Post Sales Technical Support Specialist for ARCAD Software, an international ISV and IBM Business Partner. He has been involved with the development and sales of many cutting-edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas, including enterprise IBM i modernization and DevOps.


Jeff Tickner

DevOps Consultant, ARCAD Software

Jeff Tickner is CTO, North America for ARCAD Software. He has worked in the Application Lifecycle Management sector on the IBM i for 22 years. He leads client engagements in product implementation and training, including ARCAD for DevOps, ARCAD Transformer for application modernization, and ARCAD Verifier for test automation. Jeff lends his expertise in the DevTestOps field as a frequent speaker at conferences around the world.