I figured that it was sitting in the middle of a fairly large post, and I wanted it to be seen and reviewed by more people than would be bothered to plough through the other stuff.
It's a suggested series of terms by which different types of performance tests can be described:
- Performance Test: Given load X, how fast will the system perform function Y?
- Stress Test: Under what load will the system fail, and in what way will it fail?
- Load Test: Given a certain load, how will the system behave?
- Benchmark Test: Given a simplified, repeatable, measurable test, how does the behaviour of the system change as I run it repeatedly over the course of development?
- Scalability Test: If I change characteristic X (e.g. double the server's memory), how does the performance of the system change?
- Reliability Test: Under a particular load, how long will the system stay operational?
- Availability Test: When the system fails, how long will it take for the system to recover automatically?
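As a rough sketch of the Benchmark Test idea above, the following Python snippet times a fixed, repeatable workload several times using the standard library's `timeit`. The function under test and the run counts are placeholders; a real benchmark would exercise the actual system and record the numbers across development so you can spot drift.

```python
import timeit

def function_under_test():
    # Placeholder for "function Y"; a real benchmark would call
    # into the system under test here.
    return sum(i * i for i in range(1000))

def benchmark(runs: int = 5, reps: int = 100) -> list[float]:
    """Time `reps` calls of the function, repeated `runs` times.

    Returning every run (rather than a single number) lets you track
    how the timings change if you re-run the benchmark as the system
    evolves during development.
    """
    return timeit.repeat(function_under_test, number=reps, repeat=runs)

if __name__ == "__main__":
    timings = benchmark()
    print(f"best of {len(timings)} runs: {min(timings):.4f}s")
```

The key property is repeatability: the same workload, run the same way, so that any change in the numbers reflects a change in the system rather than in the test.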
In addition to the above, there is one more term, which I would suggest is not a type of test in its own right but rather a term denoting the depth of analysis being performed.
- Profiling: Performing a deep analysis of the system's behaviour, using data such as stack traces, function call counts, etc.
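To make the profiling distinction concrete: in Python, the standard library's `cProfile` produces exactly this kind of data (function call counts, time spent per function). A minimal sketch, with the workload function being a stand-in:

```python
import cProfile
import io
import pstats

def hot_function():
    # Stand-in for whatever the system actually spends its time doing.
    return sum(i * i for i in range(10_000))

def workload():
    for _ in range(50):
        hot_function()

# Profile the workload and capture the statistics report.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
print(buf.getvalue())
```

This is the "depth of analysis" point: profiling like this can be layered on top of any of the test types above, rather than being a separate test type itself.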
Any thoughts?