GLaDOS

"Since you went to all the trouble of waking me up, you must really, really love to test." - GLaDOS

I really did love testing the Aperture Science Handheld Portal Device with GLaDOS circa 2008-2011. Nowadays (circa 2015 - present day), however, I've shifted my focus from portals1 to protocols2, primarily within that one final frontier of software applications: the Electronic Health Record.

1Not entirely true: I have tested a couple of patient portals.

2and APIs, databases, front ends, back ends, etc.

Had you told me that any time prior to 2015, I would have asked "...what precisely does 'testing software' entail? How is that a full-time gig?"

In their in-depth and aptly named "The Art of Software Testing, Third Edition," Glenford J. Myers, Tom Badgett, and Corey Sandler define software testing as:

Testing is the process of executing a program with the intent of finding errors.

I appreciate the twofold nature they provide, distinguishing between the act and purpose of software testing, but I am particularly partial to how they capture the former: when you test, you are executing a program. Their summation of purpose, however, always gave me the slightest pause. True, finding errors is one of the most, if not the most, important goals in software testing, but to describe it as the only purpose rings false to me. In contrast, Cem Kaner, James Bach, and Bret Pettichord, in "Lessons Learned in Software Testing, A Context-Driven Approach," offer a more expansive take:

Testing is done to find the right information

Kaner et al. offer nothing on the actual act of software testing, but their framing of the purpose is inspired, if not immediately self-evident. They go on to describe the tester's role as "the headlights of the project": through testing, testers uncover the basic data essential for making crucial decisions later. Christie Wilson, in "Grokking Continuous Delivery," defines this data as 'the information you're looking for,' which I view as nearly synonymous with 'the right information.' A test that successfully helps you find such information, per Wilson, is a Signal (whereas one that fails to do so is Noise, but more on that later). My working definition of software testing blends all of these:

Testing is the process of sending signals—executing a program with the intent of finding the right information.

"The Art of Software Testing" begins with a self-assessment, asking readers to write test cases for the following program:

The program reads three integer values from an input dialog. The three values represent the lengths of the sides of a triangle. The program displays a message that states whether the triangle is scalene, isosceles, or equilateral.
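The description leaves the implementation entirely open, but for illustration, here is one hypothetical sketch of such a program in JavaScript. The function name is an assumption, and the message strings are simply chosen to match the test cases later in this post:

```javascript
// Hypothetical classifier for the triangle program described above.
// The function name and message strings are illustrative assumptions.
function classifyTriangle(a, b, c) {
  // Input validation: each side must be a positive integer.
  if (![a, b, c].every((s) => Number.isInteger(s) && s > 0)) {
    return 'Invalid: Triangle sides must be positive integers.';
  }
  // Triangle inequality: each pair of sides must sum to more than the third.
  if (a + b <= c || a + c <= b || b + c <= a) {
    return 'Invalid: The provided lengths do not form a triangle.';
  }
  if (a === b && b === c) return 'Equilateral';
  if (a === b || b === c || a === c) return 'Isosceles';
  return 'Scalene';
}

console.log(classifyTriangle(3, 4, 5)); // 'Scalene'
```

Note the order of the checks: input validation runs before the triangle inequality, so a non-integer side is reported as an input problem rather than a geometry problem. That ordering is itself an implementation decision the description never pins down.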

At its core, a test case consists of two separate yet equally important groups: the input data, which defines your scenarios; and the description of output, which provides criteria for evaluation. Additionally, it's helpful to provide your test scenario—some context for others to better understand your inputs. An abstract example of a test case for the above program could look something like this:

Test Scenario Summary

{a: Side A test data, b: Side B test data, c: Side C test data, expected: 'Anticipated Output'}

The program description is clear in its expectations, but its conciseness could lead to assumptions regarding the actual implementation. The text states that the program will "read three integer values from an input dialog," but it doesn't specify how that input dialog should be set up. Is it safe to assume that the user will always provide an integer to the input field? How will the program respond if they don't, and how can you account for this when coming up with test cases? How many signals are required to say with confidence that this Triangle program is one of Quality?

Triangle Type Tester
These are my test cases to check that the program can handle various valid configurations of a triangle. While one might initially expect three cases to be sufficient here, a couple of additional test cases cover the remaining permutations of the Isosceles triangle.

Valid Triangles

1. Equilateral Triangle (All sides are equal)

{a: 5, b: 5, c: 5, expected: 'Equilateral'}


2. Scalene Triangle (All sides are different)

{a: 3, b: 4, c: 5, expected: 'Scalene'}


3. Isosceles Triangle (Two sides are equal)

{a: 3, b: 3, c: 4, expected: 'Isosceles'}


{a: 3, b: 4, c: 3, expected: 'Isosceles'}


{a: 4, b: 3, c: 3, expected: 'Isosceles'}


Invalid Inputs

Next, I added cases to check for invalid inputs, such as zeros, negative numbers, and non-integer values.

1. Zero Length

{a: 0, b: 4, c: 5, expected: 'Invalid: Triangle sides must be positive integers.'}

2. Negative Length

{a: -3, b: 4, c: 5, expected: 'Invalid: Triangle sides must be positive integers.'}

3. All zeros

{a: 0, b: 0, c: 0, expected: 'Invalid: Triangle sides must be positive integers.'}

4. Non-Numeric Value

{a: 'a', b: 2, c: 3, expected: 'Invalid: Triangle sides must be positive integers.'}

5. Floating Point Number

{a: 3.5, b: 4, c: 5, expected: 'Invalid: Triangle sides must be positive integers.'}

6. Undefined value

{a: 2, b: 3, c: undefined, expected: 'Invalid: Triangle sides must be positive integers.'}

Invalid Triangles

Finally, I tested cases where the provided inputs do not form a valid triangle. These include cases where the sum of two sides equals or is less than the third side:

1. Sum of Two Sides Equals the Third

{a: 1, b: 2, c: 3, expected: 'Invalid: The provided lengths do not form a triangle.'}

{a: 1, b: 3, c: 2, expected: 'Invalid: The provided lengths do not form a triangle.'}

{a: 3, b: 1, c: 2, expected: 'Invalid: The provided lengths do not form a triangle.'}

2. Sum of Two Sides Less than the Third

{a: 1, b: 2, c: 4, expected: 'Invalid: The provided lengths do not form a triangle.'}

{a: 1, b: 4, c: 2, expected: 'Invalid: The provided lengths do not form a triangle.'}

{a: 4, b: 1, c: 2, expected: 'Invalid: The provided lengths do not form a triangle.'}
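All six of these cases violate the triangle inequality, which can be expressed as a one-line check (a hypothetical helper, not part of the original program description):

```javascript
// Triangle inequality: every pair of sides must sum to strictly more
// than the remaining side, or the lengths cannot form a triangle.
const isTriangle = (a, b, c) => a + b > c && a + c > b && b + c > a;

console.log(isTriangle(1, 2, 3)); // false (degenerate: 1 + 2 === 3)
console.log(isTriangle(3, 4, 5)); // true
```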

We now have 17 tests, 17 signals, that we can use to gather all the information we need to say with a high degree of certainty that the program is working as intended.
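Putting it together, a minimal harness for running such a table of cases might look like the sketch below. The `classify` function is a hypothetical stand-in for the real program, inlined so the snippet runs on its own; the case table is a slice of the 17 cases above:

```javascript
// Hypothetical stand-in for the triangle program, inlined for self-containment.
const classify = (a, b, c) => {
  if (![a, b, c].every((s) => Number.isInteger(s) && s > 0))
    return 'Invalid: Triangle sides must be positive integers.';
  if (a + b <= c || a + c <= b || b + c <= a)
    return 'Invalid: The provided lengths do not form a triangle.';
  if (a === b && b === c) return 'Equilateral';
  return a === b || b === c || a === c ? 'Isosceles' : 'Scalene';
};

// A slice of the test table from this post; each case is one signal.
const cases = [
  { a: 5, b: 5, c: 5, expected: 'Equilateral' },
  { a: 3, b: 4, c: 5, expected: 'Scalene' },
  { a: 3, b: 3, c: 4, expected: 'Isosceles' },
  { a: 0, b: 4, c: 5, expected: 'Invalid: Triangle sides must be positive integers.' },
  { a: 1, b: 2, c: 3, expected: 'Invalid: The provided lengths do not form a triangle.' },
];

// A failing case is Noise; an empty failure list means every signal came through.
const failures = cases.filter(({ a, b, c, expected }) => classify(a, b, c) !== expected);
console.log(failures.length === 0 ? 'All signals clear' : `${failures.length} case(s) failed`);
```

Because each case carries its own expected output, the harness needs no per-case logic: it just compares what the program says against what the test case anticipated, which is exactly the two-part structure of a test case described earlier.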