[scala-tools] Common Test Runner for JVM


[scala-tools] Common Test Runner for JVM

Esko Luontola
I'll soon start writing a new test runner - a common test runner for the
JVM, codename CTR4J. It will provide a superset of JUnit's test runner's
features, while at the same time overcoming some of JUnit's limitations
and helping integration with development tools. I would like to hear
your opinions on this project.


        MOTIVATION

JUnit's test runner - especially the org.junit.runner.RunWith annotation
- has been very successful in providing a common interface for running
tests from multiple testing frameworks. If a testing framework provides
an org.junit.runner.Runner implementation, every Java build tool and IDE
will automatically support the framework.

However with the rise of Scala testing frameworks (Specs, ScalaTest,
Specsy - the last one being written by me), some limitations of JUnit's
test runner are coming to the fore:

- JUnit makes an implicit assumption that all tests are known before any
test code is executed. For example IntelliJ IDEA calls
Runner.getDescription() before calling Runner.run() and has problems if
the descriptions change during test execution. But in the Scala testing
frameworks, in order to achieve a concise syntax, test declarations are
implemented as method calls in the class constructor, which means that
at least some of the test code must be executed before it is known
what tests there are.

- JUnit doesn't know the concept of nested tests. It knows the concept
of suites containing suites, but not "test methods" which contain other
"test methods". But at least Specs and Specsy allow organizing tests
into arbitrarily nested closures. For them this limitation means that
IDEs are not able to show the right mental model of what is really
happening during test execution, which makes understanding test output
harder.

- There is an implicit assumption that test names are the same as method
names, and that the tests are implemented as methods. For example in
IntelliJ IDEA it's possible to navigate from the test results into test
methods. But with the Scala frameworks this is not possible, because the
test names do not correspond to any method declaration (and a string search
is not 100% reliable because the test name could be generated
dynamically or there could be multiple tests with the same name; at
least Specsy allows duplicate names).
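To make the first limitation concrete, here is a minimal Java sketch (all names invented, not any real framework's API) of the spec style these frameworks use: tests are registered as side effects of running the constructor, so the test list simply does not exist until the class has been instantiated.

```java
import java.util.*;

// Hypothetical mini-framework: calling test(...) in the constructor
// registers a named test body. Nothing is known statically.
class Spec {
    final Map<String, Runnable> tests = new LinkedHashMap<>();
    void test(String name, Runnable body) { tests.put(name, body); }
}

class StackSpec extends Spec {
    StackSpec() {
        Deque<Integer> stack = new ArrayDeque<>();
        test("is empty initially", () -> { assert stack.isEmpty(); });
        test("push adds an element", () -> {
            stack.push(1);
            assert stack.size() == 1;
        });
    }
}

public class Demo {
    public static void main(String[] args) {
        // A JUnit-style runner asks for the Description tree *before*
        // running anything - but here the names only exist after the
        // constructor (i.e. test code) has already executed.
        Spec spec = new StackSpec();
        System.out.println("discovered: " + spec.tests.keySet());
        spec.tests.forEach((name, body) -> {
            body.run();
            System.out.println("passed: " + name);
        });
    }
}
```

Asking `getDescription()` first forces such a framework to either run the constructor twice or lie about the tree, which is exactly where tools like IDEA get confused.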

There are also some other limitations which affect all Java testing
frameworks:

- When a test prints something, IntelliJ IDEA collects what is printed
to stdout and stderr, so that when you select a test in the test
results, it will show only what that test printed. This is very useful
for debugging with println-statements. But because there is no
synchronization between the test runner and the IDE which reads the
stdout/err, IDEA doesn't always associate what a test printed with the
correct test, especially when the tests are very fast.

- No built-in support for executing the tests in parallel. Third-party
tools are needed for it.
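For comparison, the raw mechanics of running independent suites in parallel are simple on the JVM; what a common runner would add is the hard part of reporting, ordering and output capture on top. A minimal sketch with a fixed thread pool (the suite bodies are placeholders):

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelRunner {
    public static void main(String[] args) throws Exception {
        // Placeholder "suites" standing in for real test classes.
        List<Runnable> suites = List.of(
            () -> System.out.println("SuiteA ok"),
            () -> System.out.println("SuiteB ok"),
            () -> System.out.println("SuiteC ok"));

        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
        List<Future<?>> futures = new ArrayList<>();
        for (Runnable suite : suites) {
            futures.add(pool.submit(suite));
        }
        for (Future<?> f : futures) {
            f.get(); // propagate any failure from a worker thread
        }
        pool.shutdown();
        System.out.println("all suites finished");
    }
}
```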

Possibly because of being faced with the above limitations, at least
Specs, ScalaTest and TestNG have implemented their own test runners.
This in turn means that the IDEs need to support each test runner
individually, which results in varying levels of quality and features
between the integration with different test runners. For example, at
least some time ago, IDEA's Scala plugin did not support automatically
finding all Specs tests in the project and executing them.

There is also code duplication inside and between all IDEs, build tools
and CI servers because they have had to write code to keep track of the
test execution state, and that needs to be repeated for each test runner.


        THE PROJECT

I'll soon start writing a test runner to solve the above mentioned
limitations and to ease the integration with build tools, IDEs, CI
servers etc. The license will be Apache License 2.0. I'm also thinking
of making this not only open source, but also "open development", by
screencasting similar to http://jamesshore.com/Blog/Lets-Play/

What the test runner will provide to testing frameworks is a superset
of the features provided by these JUnit classes: RunWith, Runner,
Description, RunNotifier. The API will be a bit more generic and with
new abstractions, in order to support for example nested tests.
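As a rough illustration of what "more generic abstractions" could mean (the names below are invented for this sketch, not the planned API): instead of a Description tree computed up front, nesting can be expressed as paired start/finish events fired while the tests execute, so tools learn the structure incrementally.

```java
// Hypothetical event-based notification API: nesting is implied by
// the pairing of started/finished events, not by a precomputed tree.
interface SuiteNotifier {
    void fireTestStarted(String name);
    void fireTestFinished(String name);
}

class PrintingNotifier implements SuiteNotifier {
    private int depth = 0;
    public void fireTestStarted(String name) {
        System.out.println("  ".repeat(depth++) + "> " + name);
    }
    public void fireTestFinished(String name) {
        depth--;
        System.out.println("  ".repeat(depth) + "< " + name);
    }
}

public class NestedDemo {
    public static void main(String[] args) {
        SuiteNotifier n = new PrintingNotifier();
        // A framework would fire these while evaluating nested closures.
        n.fireTestStarted("a stack");
        n.fireTestStarted("when empty");
        n.fireTestStarted("pop throws");
        n.fireTestFinished("pop throws");
        n.fireTestFinished("when empty");
        n.fireTestFinished("a stack");
    }
}
```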

What the test runner will provide to IDEs, build tools and CI servers
is a library for launching the tests in a new VM instance, monitoring
the test execution status, collecting the test results and what was
printed to stdout/err etc. The tools just need to configure the
classpath and choose which tests to run (e.g. using file name patterns).
I will also provide a reference implementation of a UI for running the
tests, because running tests as part of the TDD cycle is too important a
feature for its usability to be left in the hands of programmers without
interaction design skills.
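Selecting test classes by file name pattern, for instance, can lean on the JDK's glob matchers; the pattern below is only an illustrative default, not a decided convention:

```java
import java.nio.file.*;

public class PatternDemo {
    public static void main(String[] args) {
        // "**/*Test.class" is an example pattern a tool might configure.
        PathMatcher matcher = FileSystems.getDefault()
            .getPathMatcher("glob:**/*Test.class");
        // Matches a compiled class whose name ends in "Test"...
        System.out.println(
            matcher.matches(Paths.get("build/com/example/FooTest.class")));
        // ...but not other classes on the classpath.
        System.out.println(
            matcher.matches(Paths.get("build/com/example/Helper.class")));
    }
}
```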

What the test runner will provide to users is running tests from all
testing frameworks (which support this runner or JUnit's test runner) in
one suite, running them in parallel on multiple CPU cores (I'll tackle
parallelization over multiple machines as a separate project, probably
as a commercial tool), integration with every development tool on the
JVM (I intend to contact all tool vendors and gather requirements from
them to ease the integration) and reliability (backwards compatibility is
very important to me, and I plan on making it possible for each tool
vendor to write integration tests, which will be run as part of the test
runner's development builds, to detect any breaking changes).

So, I would like to ask for your opinion on this project. Here are some
questions:

- Do you find this useful?

- What do you think would be a good name for the project? One suggestion
is CTR4J (named similarly to SLF4J), which is googleable but
perhaps not very pronounceable or memorable.

- What would be a good name for the annotation which corresponds to
@RunWith? Its name needs to be different to be googleable and to make it
easy to annotate a class with both JUnit's and this test runner's
annotation.

- Any other thoughts?

--
Esko Luontola
www.orfjackal.net

Re: [scala-tools] Common Test Runner for JVM

David Bernard-3
On 12/09/2010 14:14, Esko Luontola wrote:

> So, I would like to ask for your opinion on this project. Here are some questions:
>
> - Do you find this useful?

Yes.
It may overlap with some goals of http://github.com/harrah/test-interface, which allows a test front-end (shell, build tool, IDE, CI) to use a common interface to run and communicate with a backend (test framework)
=> possible collaboration.

It's ambitious. Surefire (a similar attempt for Maven) more or less failed, because it's hard for a front-end plus middleware to provide access to the full, up-to-date feature set of a backend.

> - What do you think would be a good name for the project? One suggestion is CTR4J (actually named similar to SLF4J) which is googleable, but perhaps not very pronounceable or memorable.

No idea

> - What would be a good name for the annotation which corresponds @RunWith? Its name needs to be different to be googleable and to make it easy to annotate a class with both JUnit's and this test
> runner's annotation.

No idea. I don't use annotations for my Scala tests (except when working with Java frameworks like JUnit/TestNG).
Why do you need this annotation for the test to say "I run with"? IMO the test runs with the backend, and CTR4J plus a backend plugin do the rest.

> - Any other thoughts?

Quite a long message; I didn't follow the links (Let's Play).
Maybe I could help create a Maven plugin (integration with, or a replacement of, Surefire?).

/davidB


Re: [scala-tools] Common Test Runner for JVM

Esko Luontola
David Bernard wrote on 12.9.2010 16:11:
> May be overlap some goals of http://github.com/harrah/test-interface :
> allowing test-front end (shell builder, IDE, CI) to use a common
> interface to run/communicate with backend (test framework)
> => May be possible collaboration
>
> Ambitious, surefire (a similar tentative for maven) +/- failed. Because
> it's hard for a front-end + middleware to provide access to the full
> uptodate feature of a backend.

Thanks for reminding me about test-interface and Surefire. I'll add them to
my list of communities to contact. Their experience interfacing with
testing frameworks will be valuable.

I have perceived the least common denominator of testing frameworks (on
the JVM) to be that the tests are organized into classes (as is everything
in Java) and the tests are visualized by IDEs and other tools as a tree
structure. So my first draft (which I've implemented already once in
Specsy) for the interfaces from the testing framework's point of view
looks something like this: http://gist.github.com/576245
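The "tree structure" model can be sketched roughly like this (a guess at the shape, not the actual contents of the linked gist): a named node with children, which testing frameworks produce and tools render.

```java
import java.util.*;

// Sketch of a tree model for test results: frameworks build it,
// IDEs/build tools walk and render it. Names are illustrative.
class TestNode {
    final String name;
    final List<TestNode> children = new ArrayList<>();

    TestNode(String name) { this.name = name; }

    TestNode child(String childName) {
        TestNode c = new TestNode(childName);
        children.add(c);
        return c;
    }

    void print(String indent) {
        System.out.println(indent + name);
        for (TestNode c : children) c.print(indent + "  ");
    }
}

public class TreeDemo {
    public static void main(String[] args) {
        TestNode root = new TestNode("StackSpec");
        TestNode whenEmpty = root.child("when empty");
        whenEmpty.child("pop throws");
        root.child("push adds an element");
        root.print("");
    }
}
```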


>> - What would be a good name for the annotation which corresponds
>> @RunWith? Its name needs to be different to be googleable and to make
>> it easy to annotate a class with both JUnit's and this test
>> runner's annotation.
>
> No idea, I don't use annotation for my Scala Test (expect when working
> with java framework JUnit/TestNG).
> Why to you need this annotation or the test to say "I run with", IMO the
> test run with the backend and the CTR4J + backend-plugin do the rest.

I decided to go for a similar approach as JUnit 4, because that has
proved to be flexible and reusable - there are tens of testing
frameworks which rely on JUnit's test runner.

When the test runner searches the class files for tests to execute,
there needs to be some way for the class file to declare that it is in
fact a test class and with which testing framework it needs to be
executed. The @RunWith annotation has been useful for that - it marks
the class as a test and tells which class knows how to execute it.
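The mechanism is easy to sketch with an invented annotation name (the real name is exactly what the naming question above is about): a runtime-retained class annotation pointing at the class that knows how to execute the tests.

```java
import java.lang.annotation.*;

// Invented name, standing in for the @RunWith-equivalent.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ExecuteWith { Class<?> value(); }

// Placeholder for a framework's runner implementation.
class FancyRunner {}

@ExecuteWith(FancyRunner.class)
class MySpec {}

public class AnnotationDemo {
    public static void main(String[] args) {
        // The runner scans class files, finds the annotation, and
        // delegates execution to the class it names.
        ExecuteWith ann = MySpec.class.getAnnotation(ExecuteWith.class);
        System.out.println("run " + MySpec.class.getSimpleName()
            + " with " + ann.value().getSimpleName());
    }
}
```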


> May be I could help to create a maven plugin (integration ore
> replacement of surefire ??)

Thanks, that would be very useful. I don't have experience writing
Maven plugins, and Maven integration is very important. Surefire
integration sounds good - it already supports JUnit and TestNG, so
supporting a third framework should be simple (provided that Surefire
follows the Open-Closed Principle).

--
Esko Luontola
www.orfjackal.net