Showing posts with label java. Show all posts

Monday, February 8, 2021

SLF4j Logging performance: lazy argument evaluation

Sometimes we need to log a dynamically generated expression that is expensive to compute. For instance, I had to log an object in YAML format, but only when debug was enabled. Serializing an object to YAML is an expensive operation, especially when you need to scale up to thousands of calls per second.

(For reference, I am using Java 8 with slf4j-1.7.25.)

If I had directly used

 log.debug("The message is:{}", toYamlString(myObject));

then the message-generating method would be called every time, even if debug was disabled on the logger. This is because Java evaluates method arguments eagerly, before the call is made.

The obvious choice here is to guard the call:

  if (log.isDebugEnabled()) {
    log.debug("The message is:{}", toYamlString(myObject));
  }

but, apart from adding unpleasant boilerplate on top of your method, this also duplicates the if (log.isDebugEnabled()) check that the logging framework already performs internally. So I took some time to see if it could be done in a better way.

At some point I found a post that solved this nicely. I liked it and wrote my code accordingly. Then I realised it could be even simpler!

So I simplified it to only this:

  private static Object lazyString(final Supplier<?> stringSupplier) {
    return new Object() {
      @Override
      public String toString() {
        return String.valueOf(stringSupplier.get());
      }
    };
  }

Then in my logging call:

  log.debug("The message is:{}", lazyString(() -> toYamlString(myObject)));

or, if your method takes no arguments, you can use a method reference:

  log.debug("The message is:{}", lazyString(this::toYamlString));

That's it! Simple and elegant.
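To see the deferral in action without wiring up a logger, here is a minimal, self-contained sketch; the counter stands in for the expensive serialization, and String.valueOf plays the role of the logger formatting the argument:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class LazyStringDemo {

    // same helper as above: defers evaluation until toString() is called
    static Object lazyString(final Supplier<?> stringSupplier) {
        return new Object() {
            @Override
            public String toString() {
                return String.valueOf(stringSupplier.get());
            }
        };
    }

    public static void main(String[] args) {
        final AtomicInteger calls = new AtomicInteger();
        final Object lazy = lazyString(() -> "expensive#" + calls.incrementAndGet());

        // passing the wrapper around costs nothing: the supplier has not run yet
        if (calls.get() != 0) throw new AssertionError("supplier ran too early");

        // only when someone actually formats the argument is the work done
        final String rendered = String.valueOf(lazy);
        if (!"expensive#1".equals(rendered)) throw new AssertionError(rendered);
        if (calls.get() != 1) throw new AssertionError("expected exactly one evaluation");
        System.out.println("lazy evaluation verified");
    }
}
```

When the logger's level check fails, it never formats the argument, so toString() is never called and the expensive work is skipped entirely.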

The good news is that more and more logging frameworks have added, or are adding, native support for deferred evaluation of arguments.

Until then, a small workaround like this does the job.

Have a nice day,


Tuesday, April 3, 2018

Compile Maven project and tests with different compilers and with different unit and integration test directories

My project is in Java, and I wanted to give my team the possibility to use Java/JUnit and Groovy/Spock for our tests.

Moreover, I wanted to keep unit tests separated from integration tests, if possible with different compilation life cycles, so that the flow is:

1. compile the project code from src/main/java using the default compiler
2. compile and run the unit tests using the mixed java-groovy eclipse compiler
3. compile and run the integration tests using the mixed java-groovy eclipse compiler

This way the production code is compiled natively while we can play with java-groovy mixed classes in unit and integration tests.

After digging a lot and trying many unsuccessful approaches I got it working exactly as I wished.
Here is the pom:

As you may notice, I left the unit tests in the standard Maven path, i.e. src/test/java. If I wanted to move them further, to src/test/unit/java, I would need to configure both the compiler section and the surefire plugin section the same way I did for the integration tests.

Basically the secret is in instructing the compiler on:
- when to run (we configure this aspect within an execution section)
- where to compile sources from - within the element compileSourceRoots
- where to output classes - within the element outputDirectory
- what classes (by name or pattern) to include - within the element includes
and at the same time instructing the test runner (surefire or failsafe) on:
- where the test sources are located - within testSourceDirectory
- where the compiled test classes are located - within testClassesDirectory
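A sketch of how those pieces could fit together in the pom; the execution ids are the ones from this post, while the directory names (src/integration-test/java, integration-test-classes) and the groovy-eclipse-compiler version are illustrative and should be adapted to your project:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <executions>
    <!-- default-compile stays untouched: src/main/java is built by plain javac -->
    <!-- overriding the default-testCompile id replaces the stock unit-test
         compilation with the mixed java-groovy eclipse compiler -->
    <execution>
      <id>default-testCompile</id>
      <goals><goal>testCompile</goal></goals>
      <configuration>
        <compilerId>groovy-eclipse-compiler</compilerId>
      </configuration>
    </execution>
    <!-- an extra execution compiles the integration tests from their own root -->
    <execution>
      <id>integration-testCompile</id>
      <phase>pre-integration-test</phase>
      <goals><goal>testCompile</goal></goals>
      <configuration>
        <compilerId>groovy-eclipse-compiler</compilerId>
        <compileSourceRoots>
          <compileSourceRoot>${basedir}/src/integration-test/java</compileSourceRoot>
        </compileSourceRoots>
        <outputDirectory>${project.build.directory}/integration-test-classes</outputDirectory>
      </configuration>
    </execution>
  </executions>
  <dependencies>
    <!-- backing implementation for the groovy-eclipse-compiler compilerId -->
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy-eclipse-compiler</artifactId>
      <version>2.9.2-01</version>
    </dependency>
  </dependencies>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- point the runner at the integration-test sources and compiled classes -->
    <testSourceDirectory>${basedir}/src/integration-test/java</testSourceDirectory>
    <testClassesDirectory>${project.build.directory}/integration-test-classes</testClassesDirectory>
  </configuration>
</plugin>
```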

One essential thing to notice is the id of each execution element (in our case default-testCompile and integration-testCompile): Maven identifies each execution instance by its id, so it must be unique.

Another thing that many don't know is that the default ids can be overridden. Indeed, I used the default Maven compiler id for the unit-test compilation so that only the eclipse compiler runs, instead of also running the default compiler. You can change the id and test this yourself.

Hope this helps you too!

Cheers, Dikran

Tuesday, December 1, 2015

Automated Testing Aid: Manually Running a Quartz Job

I work on a project where testing is a first-class citizen. We do unit tests, security tests, integration tests, end-to-end API tests (SBE), and end-to-end functional (interface-based) tests, also using SBE.
All right, everything's fine until I need to throw in some asserts at the end of my integration/SBE test to check that the whole process performed well. The problem: this part of the process is accomplished by Quartz jobs that run asynchronously on their own schedule, beyond our control.

Note: the assumption for this post is that your Quartz scheduler runs within the same application as your tested classes. Distributed Quartz jobs are another story.

Some could say: OK, but you can get a job by its name and call triggerJob(JobKey) on a Quartz scheduler instance, which should trigger the job immediately. But be careful: this is about triggering the job, not running it. That means that:

  • the command is asynchronous and returns immediately;
  • the job could actually start later, depending on the schedule's config and
  • you don't actually know when the job shall finish so that you can test your assumptions about the outcome of it.

Two quick solutions:
  1. after the test's execution finishes, before asserting on data, sleep the test thread for a while to give Quartz time to do its work. But sleep for how long? Some manual tries could give us an empirical idea of how long we should wait before the job is usually executed, but we are never going to be 100% sure it actually ran. And this approach could make our test suite last forever: imagine hundreds of tests of this type, each sleeping for a few seconds... It doesn't sound very appealing.
  2. add a JobListener to the scheduler, trigger the job, then put your main thread in wait until the listener fires on execution finished and notifies your main thread so it can resume its testing task. But again, there might be many jobs already triggered and running before ours gets its chance to run. And after all, would you really want to get into unexpected threading issues? I think not.
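For completeness, option 2 is usually implemented with a latch rather than raw wait/notify. Here is a stdlib-only sketch of the synchronization, where the spawned thread stands in for the Quartz-triggered job and countDown() stands in for the listener's jobWasExecuted() callback:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AwaitJobDemo {
    public static void main(String[] args) throws Exception {
        // the latch plays the role of the JobListener callback
        final CountDownLatch jobFinished = new CountDownLatch(1);

        // simulated asynchronous job execution (in reality: quartz trigger + listener)
        new Thread(() -> {
            // ... the job's actual work would happen here ...
            jobFinished.countDown(); // the listener would do this in jobWasExecuted()
        }).start();

        // the test thread blocks until the job reports completion, with a timeout guard
        final boolean finished = jobFinished.await(5, TimeUnit.SECONDS);
        if (!finished) throw new AssertionError("job did not finish in time");
        System.out.println("job finished, safe to assert on outcome");
    }
}
```

Even with the latch, the objections above still hold: you are synchronizing with *a* job execution, not necessarily the one your test triggered.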

So, after trying the aforementioned approaches and not really being happy with them, I thought: why not run the jobs I am interested in directly?
Well, this is not trivial, because I'd like to run the jobs as they are, without having to know what other stuff is injected into each of my classes extending QuartzJobBean in order to make it work. So, after some research and study of how Quartz collaborates with Spring, this is what came out:

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeansException;
import org.springframework.beans.MutablePropertyValues;
import org.springframework.beans.PropertyAccessorFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

import java.lang.reflect.Method;
import java.util.Map;

public class ManualJobExecutor implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    public void executeJob(final Class<? extends Job> jobClass) {

        try {
            // create the job instance
            final Job quartzJob = jobClass.newInstance();
            // For the created job instance, search all services that are injected by quartz.
            // Those service instances are kept inside each scheduler context as a map.
            final BeanWrapper beanWrapper = PropertyAccessorFactory.forBeanPropertyAccess(quartzJob);
            final MutablePropertyValues propertyValues = new MutablePropertyValues();
            // get all schedulers defined across all spring configurations for this application
            final Map<String, Scheduler> schedulers = applicationContext.getBeansOfType(Scheduler.class);
            for (final Scheduler scheduler : schedulers.values()) {
                // populate the candidate properties with the service instances found
                // in each scheduler context (SchedulerContext is a Map)
                propertyValues.addPropertyValues(scheduler.getContext());
            }
            // set the properties of the job (injected dependencies) with the matching services;
            // entries that have no matching property are ignored (ignoreUnknown = true)
            beanWrapper.setPropertyValues(propertyValues, true);

            // get method executeInternal(JobExecutionContext) from the job class extending QuartzJobBean
            final Method executeJobMethod = quartzJob.getClass().getDeclaredMethod("executeInternal", JobExecutionContext.class);
            executeJobMethod.setAccessible(true);
            // call executeInternal on the Job instance; passing a null context is
            // enough as long as the job does not read from it
            executeJobMethod.invoke(quartzJob, (JobExecutionContext) null);
        } catch (final Exception e) {
            throw new RuntimeException(String.format("Exception while retrieving and executing job for name=%s", jobClass.getName()), e);
        }
    }

    @Override
    public void setApplicationContext(final ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}
That's it!
Of course there are other aspects, e.g. checking whether another job of the same class is already executing, so that it won't overlap with your execution. Usually in Quartz, @DisallowConcurrentExecution takes care of this, but here you need to check it yourself.
You could also make your method accept a job by its name instead of its class, so you can get the names from your database instead of looking into the project classes.

I hope this is going to ease your testing.
Please share your thoughts.

Have a nice day,