Top 10 critical MISTAKES Performance Testers make

Performance Testing is the act of investigating system performance against three key qualities – Speed, Scalability & Stability. A Performance Tester needs to have not only a typical testing mindset but also a holistic mindset to analyze system performance & identify what is going wrong in the system during a performance test. Due to this expectation, there is often confusion between the responsibilities of a Performance Tester and a Performance Engineer.

‘Thinking beyond the performance testing tool’ is a very important quality for a performance tester. The performance testing tool has to be used as an enabler to simulate multi-user load conditions. The tool will not have the intelligence to tell you whether the workload is wrong or the test environment is far different from the production environment, etc.

We hope you have already looked at our online course, ‘Troubleshooting Skills for Performance Testers’, powered by Teachable, the most reliable e-learning platform.

http://elitesouls.teachable.com/p/troubleshooting-skills-for-performance-testers/?

Starting from the basics, it provides a detailed explanation of 12+ bottleneck analysis techniques, designed to provide a systematic & holistic thought process on how to look at end-to-end system performance problems across all possible dimensions.

The Top 10 mistakes made by Performance Testers during performance testing

No Questioning Mindset

There is a lack of a questioning mindset. The power of ‘WHY’ is not realized, and a wrong mindset takes hold: that performance testing is all about using a sound commercial tool to conduct various types of multi-user tests and report transaction response times. Such testers seem very satisfied with tool-specific script development skills & being a tool expert, and often blame others for the non-availability of full-fledged NFRs, test data, test environment, etc.

Improper Workload

The test scenarios & user load distribution levels are decided based on suggestions from the client, business analysts, product sales team, etc., despite the application already being in production, where actual end-user traffic details exist. The suggestions can be a good start but cannot be the deciding factor for finalizing the workload. It is not realized that ‘if the user access pattern is not realistic, there is no point in running performance tests for a non-realistic workload’.
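
As a rough illustration, a workload model can often be derived from production access logs rather than from stakeholder guesses. The sketch below is a minimal, hypothetical Python example: the log file name, log format and URL-to-transaction mapping are all assumptions for illustration, not part of any specific tool.

```python
# Hypothetical sketch: derive workload distribution from a production access log.
# File name, log format and URL-to-transaction mapping are assumptions.
from collections import Counter

TRANSACTIONS = {              # hypothetical mapping of URL prefixes to business transactions
    "/search": "Search",
    "/product": "View Product",
    "/cart": "Add To Cart",
    "/checkout": "Checkout",
}

def classify(url: str) -> str:
    for prefix, name in TRANSACTIONS.items():
        if url.startswith(prefix):
            return name
    return "Other"

counts = Counter()
with open("access.log") as log:            # assumed combined-format web server log
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1].split()          # e.g. 'GET /search?q=tv HTTP/1.1'
        if len(request) >= 2:
            counts[classify(request[1])] += 1

total = sum(counts.values()) or 1
for txn, hits in counts.most_common():
    print(f"{txn:15s} {hits:8d} hits  {100 * hits / total:5.1f}% of workload")
```

The resulting hit percentages can then be reviewed with the business before being baked into the test scenarios, instead of the other way around.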

Insufficient Test Data

The test data used in the performance test environment is far smaller than in the production environment. Performance testers don’t realize that for data-intensive applications, data volume becomes one of the primary factors deciding system performance. A production database dump (with sensitive data masked) or test data creation tools are not considered during performance testing, and system performance is certified against very small data volumes in the test environment.
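
Where a masked production dump is not available, bulk test data can be generated synthetically. The following is a minimal sketch, assuming a hypothetical customers table loaded from CSV; field names and row counts are illustrative and must be sized from actual production statistics.

```python
# Minimal sketch of bulk test data creation. Field names and the row count are
# assumptions; in practice the volume should match (or exceed) production sizes.
import csv
import random
import string

ROWS = 1_000_000   # size this from production row counts, not from convenience

def rand_str(n: int) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=n))

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "name", "email", "city"])
    for i in range(ROWS):
        name = rand_str(8)
        writer.writerow([i, name, f"{name}@example.com", rand_str(6)])
```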

Usage of a low-end Test Environment compared to Production capacity

The test environment made available / provided by the client is assumed to be the best configuration to carry out the planned set of tests. No caution is taken to identify the capacity ratio of the two environments or to spot differences in their deployment architectures. Not even a high-level comparison of server model, number of CPUs, CPU type (dual or quad core), CPU speed, RAM, etc. is done for the test versus production environments before scheduling the performance tests. Often performance testers feel satisfied if the client is not ready to share the configuration details, without educating the client on the need & importance of this analysis.
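
Even a back-of-the-envelope capacity comparison is better than none. The sketch below shows one assumed heuristic, with made-up server specifications, for scaling down load targets; it is not a substitute for proper extrapolation or for matching the deployment architecture.

```python
# Rough, assumed heuristic: compare test vs production capacity before agreeing
# on test targets. Real sizing must also account for CPU generation/clock speed,
# storage and deployment architecture differences.
prod = {"cores": 32, "ram_gb": 128}   # assumed production specs
test = {"cores": 8,  "ram_gb": 32}    # assumed test environment specs

cpu_ratio = test["cores"] / prod["cores"]
ram_ratio = test["ram_gb"] / prod["ram_gb"]
ratio = min(cpu_ratio, ram_ratio)     # the tighter resource constrains the environment

peak_prod_users = 4000                # assumed production peak concurrency
print(f"Test environment is roughly {ratio:.0%} of production capacity")
print(f"A naive scaled-down target would be ~{int(peak_prod_users * ratio)} users,")
print("and results must still be extrapolated with caution, not certified as-is.")
```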

Load Generation taken for granted

In spite of knowing that the application under test is accessed across geographical locations with time zone differences, the impact of network latency on the server load is blindly ignored. Local load generation machines are used, even though cloud-based load generation options have become very cheap nowadays. And the load generation machine’s own performance (CPU / Memory / Disk / Network) is not considered important, on the assumption that the load generator will always spawn the configured user load.
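
A simple, assumed way to keep the load generator honest is to sample its own resource usage while the test runs, for example with the third-party psutil package in Python:

```python
# Quick sketch (assumed approach) to confirm the load generator itself is not the
# bottleneck during a test. Requires the third-party 'psutil' package.
import time
import psutil

for _ in range(10):                     # sample ~10 intervals during the test
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    net = psutil.net_io_counters()
    print(f"CPU {cpu:5.1f}%  MEM {mem:5.1f}%  "
          f"sent {net.bytes_sent >> 20} MiB  recv {net.bytes_recv >> 20} MiB")
    if cpu > 80 or mem > 85:
        print("WARNING: load generator is saturating; results may be invalid")
    time.sleep(4)
```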

Improper / Incomplete Testing

The end users & their access patterns for the application under test are not studied; hence the performance test strategy document remains just a standard template reused wholesale. Often a couple of load tests followed by a stress test (sometimes optional) are all that is planned. The application usage characteristics are not studied to derive the right type of test (with the right test duration, ramp-up & steady-state periods, requests/second, transactions/second, load fluctuations, etc.). There is a misunderstanding of the real objective of the test & the need for carrying out the right test, leading to incomplete / improper tests.
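
One missing reasoning step is how the planned load level relates to the expected throughput. Little’s Law (users ≈ throughput × (response time + think time)) gives a first approximation; the numbers below are purely illustrative assumptions.

```python
# Worked example (assumed numbers) of deriving concurrent users from the target
# throughput using Little's Law: users = throughput * (response time + think time).
target_tps = 50.0        # transactions per second the business expects at peak
resp_time_s = 2.0        # expected average response time in seconds
think_time_s = 8.0       # average think time between user actions

concurrent_users = target_tps * (resp_time_s + think_time_s)
print(f"Approximately {concurrent_users:.0f} concurrent users are needed")
print("Ramp-up, steady state and test duration should be planned around this,")
print("not around whatever load the tool licence happens to allow.")
```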

Ineffective Performance monitoring

The need for performance monitoring is often not realized, & sometimes a few of the layers are monitored just for the sake of doing monitoring. End-to-end system layers are not monitored during performance testing because the test objectives & test exit criteria are not clearly set. And sometimes, even when server performance monitoring is done, observations & bottleneck symptoms are not paid attention to during the performance tests.
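
A lightweight sampler that writes timestamped utilisation figures to a file at least gives the test something to correlate against later. The sketch below is one assumed approach using psutil and made-up file names; in practice an APM tool or the platform’s own monitoring stack would usually be preferred.

```python
# Illustrative sampler (assumed approach): record server-side utilisation to a CSV
# for the duration of a test so it can later be correlated with response times.
# Uses the third-party 'psutil' package; run it on each monitored server.
import csv
import time
import psutil

DURATION_S = 3600        # assumed one-hour test window
INTERVAL_S = 15

with open("server_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct", "disk_read_mb", "disk_write_mb"])
    start = time.time()
    while time.time() - start < DURATION_S:
        disk = psutil.disk_io_counters()
        writer.writerow([
            int(time.time()),
            psutil.cpu_percent(interval=1),
            psutil.virtual_memory().percent,
            disk.read_bytes >> 20,
            disk.write_bytes >> 20,
        ])
        f.flush()
        time.sleep(INTERVAL_S)
```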

Use of Traditional Reactive Performance Test Methodology

With the recent digital revolution & the importance given to user experience, there is much more awareness of the impact of system performance. Yet early performance testing & proactive performance testing / engineering methodologies are not proposed for business-critical, high-traffic systems, even though nowadays business stakeholders understand the importance of bringing them in early in the SDLC. No CI/CD-integrated performance test methodology is used for systems developed in a pure agile/DevOps environment. The traditional (reactive) performance test methodology is usually proposed for any type of application, across any software development methodology (iterative / waterfall / partially agile / purely agile, etc.).
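
For CI/CD integration, the essential piece is a gate that fails the pipeline when an NFR is breached. A hedged sketch follows: the results file name, column name and threshold are assumptions, and a short load test would be triggered by the pipeline before this step runs.

```python
# Hedged sketch of a CI/CD performance gate: parse the (assumed) results CSV from a
# short pipeline load test and fail the build if p95 response time breaches the NFR.
import csv
import sys

THRESHOLD_MS = 2000                      # assumed NFR for p95 response time

times = []
with open("results.csv") as f:           # assumed column 'elapsed_ms' per sample
    for row in csv.DictReader(f):
        times.append(float(row["elapsed_ms"]))

times.sort()
p95 = times[int(0.95 * (len(times) - 1))] if times else 0.0
print(f"p95 response time: {p95:.0f} ms (threshold {THRESHOLD_MS} ms)")
sys.exit(1 if p95 > THRESHOLD_MS else 0)  # non-zero exit fails the pipeline stage
```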

Not Analyzing System Performance

This being the key phase in performance testing, the application health metrics (for example, response time, throughput, etc.) & server health metrics (for example, resource utilization, queue length, etc.) need to be monitored & correlated to make judgments about system performance issues. This activity is missed & not considered important by performance testers. Performance bottleneck symptoms or failures are not observed carefully during tests. Even basic correlation analysis of the trends of related performance metrics observed during the tests is not performed. Hence problem analysis, problem isolation & diagnosis are not given importance & remain incomplete / negligible, making the performance testing engagement of little value to the business.
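
As a simple illustration of such correlation, the sketch below (with assumed file names and column names) joins the load tool’s response-time samples with the server metrics captured during the test and prints their correlation matrix using the third-party pandas package:

```python
# Correlation sketch (assumed file names and columns): join response-time samples
# with server metrics captured during the test and check how they move together.
import pandas as pd

resp = pd.read_csv("responses.csv")          # assumed columns: timestamp, resp_ms
srv = pd.read_csv("server_metrics.csv")      # columns written by a monitoring sampler

merged = pd.merge_asof(
    resp.sort_values("timestamp"),
    srv.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
)
print(merged[["resp_ms", "cpu_pct", "mem_pct"]].corr())
# A strong resp_ms/cpu_pct correlation points towards a CPU bottleneck;
# weak correlations suggest looking at other layers (DB, network, GC, etc.).
```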

Ineffective Reporting style

Though a variety of test reports are shared with the business stakeholders in various formats, most of the reports just narrate comprehensive details about all the performance metrics reported by the tool, with nice graphs. The symptoms observed during the test, test observations, test inferences, quantitative explanations with data points, test conclusions, etc. are not included. Ignoring the fact that a wider audience (business stakeholders, technical architects, developers, performance engineers, infrastructure specialists, capacity planners, etc.) will be interested in the test report, no proper agenda is used. No standard reporting template with best practices is used.

You can use this as a checklist to ensure the above 10 mistake-prone areas are taken care of when you plan your next Performance Testing engagement. As I always feel, “Google & YouTube” have solutions for every complex question in this world. Just the time taken to get to the right content might vary.

If you are looking for guidance on how to avoid them, you can await our next online course, “Performance Testing done rightly”, a tool-agnostic course designed to cover best practices in performance testing (including NFR baselining, user traffic analysis & workload model development, test data management, performance test strategy development, capacity benchmarking, etc.). This course should address how to avoid these top 10 mistakes made by performance testers.

Happy Performance Testing & Performance Evaluation!!

Showing 5 comments
  • Charan

    Good one, agreed. And I don’t know why analysts won’t consider firewalls, network topology and bandwidth while analysing.
    I have a few cornerstone points as well.

    1. We should analyse considering the firewall
    We generally conduct testing on live servers from our work environment, which is secured by firewalls, IPS and IDS that end users don’t even go through. How can we get accurate measures?

    2. We should analyse considering the network bandwidth
    When we conduct tests in the QA environment, we should consider bandwidth. Often we use the LAN (which is utilized by many other employees); it’s our duty to think out of the box. We should get separate bandwidth with a known data transfer speed to get accuracy.

    3. We should analyse considering the network topology
    Let’s say the AUT is hosted on a server where the network topology is a bus. What would the latency be if they changed it to a star?

    4. Doing source code analysis
    We should review the source code (stored procs) and API workflows to state mitigations and root causes.

    5. Analyzing network latency – browser specific
    We should analyze how the application behaves in various browsers. We also need to check by adding a few add-ons and excluding add-ons.

    I may be wrong, but in my solitary view, I ask myself how we could get accurate measures without considering the above points!!

    • Ramya RM

      We discussed in detail 🙂

      • Dany

        Your post has lifted the level of debate

    • Chris

      Wow, your post makes mine look feeble. More power to you!

  • Taron

    If you want to get read, this is how you should write.
