[Q] Specify checks for failing and passing tests in the same llvm-lit source

Assume that I have a test source that passes with one set of command-line options and fails with an expected error message with another set of command-line options. Is it possible to specify checks for both tests in the same test source for llvm-lit?

For example, consider
; RUN: my_command -passing-option %s | FileCheck %s -check-prefix=TESTPASS
; RUN: my_command -failing-option %s | FileCheck %s -check-prefix=TESTFAIL

; TESTPASS: Passed
; TESTFAIL: Failed
// Test source

Although my_command does print the failure message “Failed”, the exit value is not 0, i.e., it does not indicate success.

Are there any examples of such tests in the LLVM repo?

Thanks for any suggestions.

You can just add « not » in front of your command to indicate the expectation of a non-zero return.
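
For example (a sketch reusing the my_command and -failing-option names from the question above), the failing RUN line becomes:

; RUN: not my_command -failing-option %s | FileCheck %s -check-prefix=TESTFAIL

Note that llvm-lit runs each RUN pipeline with pipefail enabled by default, so my_command’s non-zero exit fails the test even when FileCheck matches; not inverts that exit status so the pipeline succeeds.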

Thanks. However, if I have two (or more) tests and

(a) one of them expectedly passes with -failing-option and the other expectedly fails,
(b) both of them pass with -passing-option

would separating the tests into different files be the only option?

I am hoping to avoid making two copies of the same tests.

A worthy goal. I’ve had to do that, but it’s unusual. If you do get into that state, there are tactics for sharing source; the most common is to put the shared source in the Inputs subdirectory and have the main test read that source instead of itself. Ideally, though, you could do something like this.

; RUN: my_passing_command -passing-option %s | FileCheck %s -check-prefix=TESTPASS
; RUN: my_failing_command -passing-option %s | FileCheck %s -check-prefix=TESTPASS
; RUN: my_passing_command -failing-option %s | FileCheck %s -check-prefix=TESTPASS
; RUN: my_failing_command -failing-option %s | FileCheck %s -check-prefix=TESTFAIL

; TESTPASS: Passed
; TESTFAIL: Failed
// Test source
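
If the failing combination exits with a non-zero status, as in your original example, the last RUN line would also need not in front (a sketch under that assumption):

; RUN: not my_failing_command -failing-option %s | FileCheck %s -check-prefix=TESTFAIL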

Thank you very much for the suggestions.

The testing situation I want to handle without replicating test sources, in a bit more detail, is as follows (apologies for not providing sufficient detail upfront):

; RUN: my_command -option1 %s | FileCheck %s -check-prefix=TESTOPTION1
; RUN: my_command -option2 %s | FileCheck %s -check-prefix=TESTOPTION2

// Test source - FUNC_ABC
// Test source - FUNC_DEF

llvm-lit should report that the test has passed if:

1a) Executing my_command -option1 FUNC_ABC succeeds, i.e., the exit value is 0, and FileCheck succeeds because the output text matches.
1b) Executing my_command -option1 FUNC_DEF succeeds, i.e., the exit value is 0, and FileCheck succeeds because the output text matches.

2a) Executing my_command -option2 FUNC_ABC succeeds, i.e., the exit value is 0, and FileCheck succeeds because the output text matches.
2b) Executing my_command -option2 FUNC_DEF fails (without crashing), i.e., the exit value is non-zero, but FileCheck succeeds because the output text matches the expected error text.

As mehdi_amini suggested, I could use “not” to indicate the expected non-zero exit value in 2b, but that is not what is expected when the same command is used to test FUNC_ABC (i.e., in 2a), which returns an exit value of 0.

I haven’t yet figured out a way to represent such an intent while avoiding source duplication, even if I use shared sources. It would be great to scale this up to test the validity of, say, compiler passes that either succeed or fail on the same set of input sources (more than two in number) depending on the options passed in, say a subtarget or some such.

Thanks again for any additional help!

If you have one test source with both FUNC_ABC and FUNC_DEF in it, then it’s hard to imagine how you would distinguish the cases 2a and 2b (as they are reading the same file).

As I mentioned before, you could have separate files in Inputs/FUNC_ABC and Inputs/FUNC_DEF, then have your test file in the parent directory looking like this:

; RUN: my_command -option1 %S/Inputs/FUNC_ABC | FileCheck %s -check-prefix=TESTOPTION1
; RUN: my_command -option2 %S/Inputs/FUNC_ABC | FileCheck %s -check-prefix=TESTOPTION2
; RUN: my_command -option1 %S/Inputs/FUNC_DEF | FileCheck %s -check-prefix=TESTOPTION1
; RUN: not my_command -option2 %S/Inputs/FUNC_DEF | FileCheck %s -check-prefix=TESTOPTION2
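
One detail to watch for (an assumption on my part, since it depends on where my_command writes its diagnostics): FileCheck reads its input from stdin, so if the expected error text goes to stderr, the failing RUN line also needs a redirect, for example:

; RUN: not my_command -option2 %S/Inputs/FUNC_DEF 2>&1 | FileCheck %s -check-prefix=TESTOPTION2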

Thanks again for your reply. Partitioning test functions with different passing criteria under different invocations into separate files seems logical. I guess I was being unrealistic in hoping to keep all of the sources in the same file for my scenario.

Regards.