Where is my Analysis Tool?!

Categories: Our Thoughts | Author: Scott | Posted: 15/05/2016

In a recent engagement, we were faced with a bit of a conundrum – a limited to non-existent tools budget versus a requirement to performance test a SAP NWBC (NetWeaver Business Client, what was once called SAP-Web) deployment with volumes of up to 4000 concurrent users. Usually when dealing with SAP, LoadRunner is one of the go-to tools on the market. Of late, however, there has been a subtle but significant shift towards Apache JMeter, with SAP providing guidelines and a few handy hints on how to get the tool and NWBC to play nicely with each other. This nod towards the open-source alternative by a large Tier 1 provider does not, in my opinion, herald the beginning of the end for commercial tools such as HP LoadRunner, but it does indicate that there are alternatives available to help skirt some of the costs involved in performance testing.

 

In New Zealand, for instance, a significant number of providers who sell performance testing services also have partnerships with commercial tools vendors – so when you buy the service, you are probably going to be looking at buying a tools license too. Most tools are licensed based on the number of virtual users you require. If you get that wrong, you’re either going to undercook your test or be paying significant additional costs for virtual users that you are not going to use.

 

So, getting back to the initial problem: there is no tool in the commercial space that is going to fit the budget constraints for the number of users we need to simulate. There are, however, multiple options available in the open-source space. If I go the open-source route though, am I going to be missing out on some magic secret sauce that only a big commercial player can provide? Is there some Delphic wisdom built into a commercial package that is going to help me diagnose a critical performance issue?

 

The answer to each of those questions is ‘No’, and this, in my opinion, is where some of the myths around JMeter (and probably all open-source tools in this arena) come into play. The main one of these myths is that JMeter has poor analysis capabilities, especially when compared to a big commercial package such as LoadRunner. So let’s be blunt: JMeter has virtually no analysis capability. You can save the test data from the run and get JMeter to draw you a graph, but that is generally about it. In a similar vein though, I don’t believe that LoadRunner has fantastic analysis capabilities either. It saves test data from the run, it can aggregate collected system metrics from the system under test, and it can draw you a graph…
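
To make that point concrete, below is a minimal sketch – plain Java, not a JMeter feature – of the kind of first-pass digging that stays with you once the tool has dumped its data. It assumes the results were saved as a CSV JTL with a header row containing the standard "elapsed" and "success" columns; the file name results.jtl and the naive comma splitting are assumptions for illustration only.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Rough first-pass look at a JMeter results file (CSV JTL).
// Assumes the JTL was written with a header row and contains the
// "elapsed" and "success" columns; the file name is illustrative.
public class JtlQuickLook {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("results.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedCol = header.indexOf("elapsed");   // response time in ms
        int successCol = header.indexOf("success");   // "true" / "false"

        List<Long> elapsed = new ArrayList<>();
        long failures = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");          // naive: ignores embedded commas
            elapsed.add(Long.parseLong(cols[elapsedCol]));
            if (!Boolean.parseBoolean(cols[successCol])) {
                failures++;
            }
        }

        elapsed.sort(null);                           // natural (ascending) order
        long p90 = elapsed.get((int) Math.ceil(elapsed.size() * 0.9) - 1);
        System.out.printf("samples=%d, failures=%d, 90th percentile=%dms%n",
                elapsed.size(), failures, p90);
    }
}

Even a crude cut like this – a failure count sitting next to a percentile – is already more ‘analysis’ than a generated graph, and it is entirely the engineer’s work, not the tool’s.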

 

The use of the word ‘analysis’ with respect to performance testing tools is, in my mind, at best ambiguous and at worst misleading. Analysis as formally defined is a “detailed examination of the elements or structure of something” (courtesy of the Oxford dictionary). Having the tool aggregate results from the run and then selecting ‘File->Generate Report’ is not analysis. It is a visualization (which may or may not be valuable) of the data to help you, as the engineer, identify patterns and determine where the performance problems are. The data provided by the tool is also limited to what you ask it to collect, and that generally does not include any of the in-depth digging through various logs to marry up events on the backend with observations from the front end. With junior engineers, tool-generated ‘reporting as analysis’ has almost become de rigueur, with reports being sold by weight rather than by content.

 

Don’t believe me? As an example, a recent report that I read contained a fairly standard concurrent users vs throughput graph. The tool was a commercial one that the vendor has experience with. The numbers have been changed to protect the guilty, but the general trend looked something like this:

[Graph: concurrent users vs transactions per second – throughput climbs during ramp-up, then falls away shortly after steady state is reached]

Throughput ramps up with concurrent users and then drops off not long after steady state is reached. The analysis caption provided was generated by the tool and stated rather blandly that “This graph shows the number of concurrent users versus the transactions per second load”. The test summary said that execution was successful with 500 concurrent users and that the system should go live (actually, it was meant to be 1000, but not enough licenses were available, so the 500 users were ‘spun’ twice as ‘hard’ – and no, that’s not valid). The tool says that 500 concurrent users were involved in the test, so it must be right… right? Wrong. The graph is telling us several potential things, and not many of them are good, because at steady state we should be seeing consistent throughput. We might not be hitting our predicted concurrency due to an overly short, or potentially overly long, session time. Worse, and infinitely more likely given the lack of error handling in the scripts, the application under test has failed. The failure might be load related, or it could simply be the result of submitting invalid data and triggering business processing errors (i.e. an invalid login, or a user who can’t perform the requested action). Either way, you cannot and should not make a positive go-live recommendation based on that trend without a considerable amount of digging. Providing a graph with a system-generated caption is not analysis, despite the claims of the tools vendor or the consultant; analysis is digging down and identifying what the root cause is.

 

What then, if anything, highlights the difference between the commercial approach and the use of open source? JMeter, like many open-source tools, was initially written to fulfill a technical need at the expense of providing things like wizards, a comfortable UI, or a massive integrated monolithic environment. Open-source tooling can be trickier to set up and configure. The flip side is that there is a wealth of information and ‘how to’ guides available on the web which don’t require maintenance and support agreements to get to – and typically these are far more detailed and useful than those provided by the commercial vendors. While you can conceivably run your entire performance testing engagement without setting foot outside of the HP environment, JMeter is first and foremost an engine for driving load against the system under test, and in my opinion it places different expectations on the engineer. You are going to have to figure out how best to implement your workload, set up your load testing infrastructure, decide what code should be implemented as a compiled Java class, and work out how you are going to gather your application metrics, logs and system metrics. A commercial tool tends to come with a bundled approach and methodology, something that is very apparent when (or if) you sit the LoadRunner certification – hence the philosophy that the tool is the job: if you ‘know’ LoadRunner, you therefore ‘know’ performance testing. JMeter provides virtually no assistance in this regard, and so there is an expectation that while you might not know the tool, you should know what it is you are trying to achieve.
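
As an example of the compiled-class decision mentioned above, here is a minimal sketch of a custom Java Request sampler. The class name, the "host" parameter and the placeholder work inside the try block are illustrative assumptions; the extension points used (AbstractJavaSamplerClient, runTest, SampleResult) are JMeter’s standard mechanism for this kind of sampler.

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

// Minimal custom sampler: compile, drop the jar into lib/ext, then
// reference the class from a "Java Request" sampler in the test plan.
public class BackendCallSampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments params = new Arguments();
        params.addArgument("host", "localhost");   // surfaced as an editable field in the GUI
        return params;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.setSampleLabel("backend call");
        result.sampleStart();                      // start the response timer
        try {
            String host = context.getParameter("host");
            // ... drive whatever protocol the built-in samplers can't reach ...
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            result.sampleEnd();                    // stop the timer
        }
        return result;
    }
}

Whether that logic belongs in a compiled class, a scripted sampler or the test plan itself is exactly the sort of decision a bundled commercial methodology tends to make for you – with JMeter, it stays yours.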

 

This brings us full circle, back to the point made about analysis capabilities. Saying that the analysis capabilities of a commercial tool are far in excess of JMeter’s is like saying that it’s windy outside because I use a Mac – nonsensical. Commercial tools might be better at collecting data in some cases than JMeter, but a tool is just a means to an end. It is your job as the engineer to determine the business processes to be tested, to build the workload model, and to identify the systems architecture and the required monitoring. It is your job as the performance engineer to pull it all together, analyze the results and identify the performance issues, using your knowledge and by working closely with the client’s own technical teams.

 

The tool is not the job.

