Wednesday, March 30, 2016

Updating all branches from all local git projects in one shot


There are many times when I need to update more than one git project at once.
I usually structure my projects under a common directory like /Users/Dikran/workspace/projects.
When I update, I cd into the respective project and run git pull.

It just happens that recently I needed two things:
1) pull updated code for more than one project;
2) check changes made on branches other than the current one.

As you know, git pull updates only the current branch of a project. Moreover, it has the following limitations (quoting from the git-up project):

"It merges upstream changes by default, when it's really more polite to rebase over them, unless your collaborators enjoy a commit graph that looks like bedhead.
It only updates the branch you're currently on, which means git push will shout at you for being behind on branches you don't particularly care about right now."


So, to cover both needs at once, there is a simple solution: a small script combined with a great git extension called git-up. It is a very convenient tool that nicely complements what git already offers. Check the project site for docs and info.

The steps:

1. Install git-up extension
For Ruby (the original):
gem install git-up
Or for the Python port:
pip install git-up

2. Create a script (e.g. updateAll.sh) in the root directory of your git subprojects, containing the following:
#!/bin/bash

set -x

for project in */
do
  git -C "$project" up &
done

wait
The script cycles through all subdirectories of the current directory and issues a background git up call for each discovered directory.

The wait at the end makes the script exit only after all the update commands have finished.

As a variation, you can filter the directory names if, for instance, you want to update only specific directories. So if your main project is called myshop and its modules are myshop-frontend/, myshop-backend/ and myshop-tests/, then just change the
for project in */
with
for project in myshop*/
That's all. Simple, isn't it?

A word of warning though: although git-up suits most cases, please check the documentation first to be sure it won't mess up your specific project's commit conventions.

Have a nice day,
Dikran

Tuesday, December 1, 2015

Automated Testing Aid: Manually Running a Quartz Job

I work on a project where testing is a first-class citizen. We do unit tests, security tests, integration tests, end-to-end API tests (SBE), and end-to-end functional (UI-based) tests, also using SBE.
All right, everything's fine until I need to throw in some asserts at the end of my integration/SBE test to check whether the whole process performed well. The catch: this part of the process is carried out by Quartz jobs that run asynchronously, on their own schedule, beyond our control.

Note: The assumption for this post is that your Quartz scheduler runs within the same application as your tested classes. Distributed Quartz jobs are another story.

Some could say: OK, but you can get a job by its name and call triggerJob(JobKey) on a Quartz scheduler instance, and that should trigger the job immediately. But be careful: this is about triggering a job, not running it. That means that:

  • the command is asynchronous and returns immediately;
  • the job could actually start later, depending on the scheduler's configuration; and
  • you don't actually know when the job will finish, so you cannot tell when it is safe to assert on its outcome.

Two quick solutions:
  1. after the test's execution finishes, before asserting on data, sleep the test thread for a while to give Quartz time to do its work. But sleep for how long? Some manual tries could give us an empirical idea of how long we should wait before the job usually executes, but we are never going to be 100% sure it actually ran. And this approach could make the execution of our test suite last forever: imagine running hundreds of tests of this type, each sleeping for a few seconds... It doesn't sound very appealing.
  2. register a JobListener on the scheduler, then trigger the job and make your main thread wait until the listener is notified that execution finished and wakes your main thread up so it can resume its assertions (see the sketch right after this list). But again, there might be many jobs already triggered and running before ours gets its chance to run. And after all, would you really want to get into unexpected threading issues? I think not.
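
For reference, the listener-based approach (option 2) can be sketched roughly like this. This is only a minimal illustration using the Quartz 2.x listener API and a CountDownLatch; the class name BlockingJobRunner is made up for this example and is not what we ended up using:

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.impl.matchers.KeyMatcher;
import org.quartz.listeners.JobListenerSupport;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BlockingJobRunner {

    /** Triggers the job and blocks until the listener reports completion, or until the timeout expires. */
    public static boolean triggerAndWait(final Scheduler scheduler, final JobKey jobKey, final long timeoutSeconds)
            throws Exception {
        final CountDownLatch finished = new CountDownLatch(1);

        // listen only for executions of the job we are interested in
        scheduler.getListenerManager().addJobListener(new JobListenerSupport() {
            @Override
            public String getName() {
                return "wait-for-" + jobKey;
            }

            @Override
            public void jobWasExecuted(final JobExecutionContext context, final JobExecutionException jobException) {
                finished.countDown();
            }
        }, KeyMatcher.keyEquals(jobKey));

        scheduler.triggerJob(jobKey);

        // wait for the listener to signal completion
        return finished.await(timeoutSeconds, TimeUnit.SECONDS);
    }
}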

So, after trying the aforementioned approaches and not really being happy with them, I thought: why not directly run the jobs I am interested in?
Well, this is not that trivial, because I'd like to run the jobs as they are, without having to know what is injected into each of my classes extending QuartzJobBean in order to make them work. So, after some research into how Quartz collaborates with Spring, this is what came out:


import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeansException;
import org.springframework.beans.MutablePropertyValues;
import org.springframework.beans.PropertyAccessorFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

import java.lang.reflect.Method;
import java.util.Map;

public class ManualJobExecutor implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    public void executeJob(final Class<? extends Job> jobClass) {

        try {
            //create job instance
            final Job quartzJob = jobClass.newInstance();
            // For the created job instance, search all services that are injected by quartz.
            // Those service instances are kept inside each scheduler context as a map
            final BeanWrapper beanWrapper = PropertyAccessorFactory.forBeanPropertyAccess(quartzJob);
            final MutablePropertyValues propertyValues = new MutablePropertyValues();
            //get all schedulers defined across all spring configurations for this application
            final Map<String, Scheduler> schedulers = applicationContext.getBeansOfType(Scheduler.class);
            for (final Scheduler scheduler : schedulers.values()) {
                // Populate the possible properties with service instances found
                propertyValues.addPropertyValues(scheduler.getContext());
            }
            //set the properties of the job (injected dependencies) with the matching services
            //the other services in the list that have no matching properties shall be ignored 
            beanWrapper.setPropertyValues(propertyValues, true);

            //get method executeInternal(JobExecutionContext) from job class extending QuartzJobBean 
            final Method executeJobMethod = quartzJob.getClass().getDeclaredMethod("executeInternal", (JobExecutionContext.class));
            executeJobMethod.setAccessible(true);
            //call executeInternal on the job instance, passing null as the execution context
            executeJobMethod.invoke(quartzJob, (JobExecutionContext) null);
        } catch (final Exception e) {
            throw new RuntimeException(String.format("Exception while retrieving and executing job for name=%s", jobClass.getName()), e);
        }
    }

    @Override
    public void setApplicationContext(final ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}

That's it!
Of course there are other aspects to consider, e.g. checking whether another job of the same class is already executing so that it won't overlap with your run. Usually in Quartz, @DisallowConcurrentExecution takes care of this, but here you need to check it yourself.
You could also make the method accept a job name instead of a class, so you can take the names from your database instead of looking up the project's classes.
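
In a test, the executor can then be used roughly as follows. This is only a sketch: it assumes ManualJobExecutor is registered as a bean in the Spring test context, and both the context file name and OrderArchivingJob (a job class extending QuartzJobBean) are hypothetical:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical test context declaring ManualJobExecutor
public class OrderArchivingJobIT {

    @Autowired
    private ManualJobExecutor manualJobExecutor;

    @Test
    public void archivesOrdersWhenTheJobRuns() {
        // ...drive the application up to the point where the job would normally kick in...

        // run the job synchronously, on the test thread
        manualJobExecutor.executeJob(OrderArchivingJob.class); // hypothetical job class

        // ...assert on the expected outcome of the job...
    }
}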

I hope this is going to ease your testing.
Please share your thoughts.


Have a nice day,
Dikran

Wednesday, May 27, 2015

Spring Boot & Jasypt easy: Keep your sensitive properties encrypted


Goal


I want to store my database password encrypted in the application properties file and provide the property encryption password at runtime as a Java system property or an environment variable.

Context:


Java 7, Spring Boot 1.2.3.RELEASE
Currently Spring Boot does not offer native property encryption support.

Solution


Use the Jasypt encryption library and integrate it into Spring Boot's configuration flow.

How?
Here is a quick and dirty example:

1. Download jasypt and unzip the contents in a folder;
2. Choose a password for encrypting your sensitive properties; for the purpose of this example we choose "my-encryption-password";
3. Choose the property you want encrypted; here we choose to encrypt the database password "my-database-password";
4. Encrypt the database password ("my-database-password") using jasypt and the encryption password ("my-encryption-password"); go into the jasypt bin folder and run:

$ encrypt.sh  input=my-database-password password=my-encryption-password

----ENVIRONMENT-----------------

Runtime: Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 24.60-b09

----ARGUMENTS-------------------

input: my-database-password

password: my-encryption-password

----OUTPUT----------------------

TJ1vA+DLWFrwEmbZKmGmawEonbJw4DxhkFf53JzKfvY=

The output is the encrypted password.
To configure the database in Spring Boot's application.properties, we add:

#for this example we use H2 database
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.url=jdbc:h2:mem:my-schema
spring.datasource.username=test-user

#here we provide the encrypted database password, enclosed in ENC()
#so that jasypt can detect and decrypt it
spring.datasource.password=ENC(TJ1vA+DLWFrwEmbZKmGmawEonbJw4DxhkFf53JzKfvY=)
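
Before wiring anything into Spring Boot, you can sanity-check the round trip directly with jasypt's API. Here is a minimal sketch (the class name is just for illustration, and it assumes the value was encrypted with encrypt.sh's default algorithm and the same encryption password):

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

public class EncryptionCheck {

    public static void main(final String[] args) {
        // same encryption password that was passed to encrypt.sh
        final StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setPassword("my-encryption-password");

        // decrypting the value produced above should give back the original database password
        System.out.println(encryptor.decrypt("TJ1vA+DLWFrwEmbZKmGmawEonbJw4DxhkFf53JzKfvY="));
    }
}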


Integrating Spring Boot and Jasypt


In order to instruct Spring Boot to transparently interpret our property file and extract and decrypt the encrypted properties we need to:

1. Create a PropertySourceLoader implementation that knows how to parse property files, identify encrypted properties, and decrypt them before making them available to other components. The class also knows how to read the encryption password from a system property (provided on the command line as -Dproperty.encryption.password=my-encryption-password) or from an operating system environment variable (export PROPERTY_ENCRYPTION_PASSWORD="my-encryption-password"). The listing follows:
package com.myexample;

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.jasypt.spring31.properties.EncryptablePropertiesPropertySource;
import org.springframework.boot.env.PropertySourceLoader;
import org.springframework.core.PriorityOrdered;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PropertiesLoaderUtils;

import java.io.IOException;
import java.util.Properties;


/**
 * This class is a replacement for the default Spring PropertySourceLoader. It has the capability of detecting
 * and decrypting encrypted properties via Jasypt Encryption Library.
 * The decryption password must be provided via an environment variable or via a System property. The name of the property can be {@code PROPERTY_ENCRYPTION_PASSWORD} or {@code property.encryption.password}.
 * For more information see http://www.jasypt.org/ and http://www.jasypt.org/spring31.html
 * For Spring Boot integration, the default {@link PropertySourceLoader} configuration was overridden by the
 * META-INF/spring.factories file.
 *
 * @see org.springframework.boot.env.PropertySourceLoader
 */

public class EncryptedPropertySourceLoader implements PropertySourceLoader, PriorityOrdered {

    private static final String ENCRYPTION_PASSWORD_ENVIRONMENT_VAR_NAME_UNDERSCORE = "PROPERTY_ENCRYPTION_PASSWORD";
    private static final String ENCRYPTION_PASSWORD_ENVIRONMENT_VAR_NAME_DOT = "property.encryption.password";
    private static final String ENCRYPTION_PASSWORD_NOT_SET = "ENCRYPTION_PASSWORD_NOT_SET";

    private final StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();

    public EncryptedPropertySourceLoader() {
        this.encryptor.setPassword(getPasswordFromEnvAndSystemProperties());
    }

    private String getPasswordFromEnvAndSystemProperties() {
        String password = System.getenv(ENCRYPTION_PASSWORD_ENVIRONMENT_VAR_NAME_UNDERSCORE);
        if (password == null) {
            password = System.getenv(ENCRYPTION_PASSWORD_ENVIRONMENT_VAR_NAME_DOT);
            if (password == null) {
                password = System.getProperty(ENCRYPTION_PASSWORD_ENVIRONMENT_VAR_NAME_UNDERSCORE);
                if (password == null) {
                    password = System.getProperty(ENCRYPTION_PASSWORD_ENVIRONMENT_VAR_NAME_DOT);
                    if (password == null) {
                        password = ENCRYPTION_PASSWORD_NOT_SET;
                    }
                }
            }
        }
        return password;
    }

    @Override
    public String[] getFileExtensions() {
        return new String[]{"properties"};
    }

    @Override
    public PropertySource load(final String name, final Resource resource, final String profile) throws
            IOException {
        if (profile == null) {
            //load the properties
            final Properties props = PropertiesLoaderUtils.loadProperties(resource);

            if (!props.isEmpty()) {
                //create the encryptable properties property source
                return new EncryptablePropertiesPropertySource(name, props, this.encryptor);
            }
        }

        return null;
    }

    @Override
    public int getOrder() {
        return HIGHEST_PRECEDENCE;
    }
}

2. Create a META-INF/spring.factories file in your project's resources (so it ends up at META-INF/spring.factories on the classpath) to override the default PropertySourceLoader (org.springframework.boot.env.PropertiesPropertySourceLoader), which is registered in the META-INF/spring.factories file shipped with the Spring Boot distribution. Our file should contain one line, as follows:
org.springframework.boot.env.PropertySourceLoader=com.myexample.EncryptedPropertySourceLoader

That's it! Now your application should be able to use encrypted properties.

Thanks for reading!
Dikran

To give the right credits: the info that helped me solve the problem and write this post was gathered from this Stack Overflow post.


Friday, October 5, 2012

Scrum and Story Points, what's the story?

After working with Scrum for a while and watching the debate of time vs. story points, I came to a personal conclusion that helped me make better estimations and use story points to their full value.

  In my opinion, story points best measure risk. You estimate this risk by taking into account your proficiency, your overall experience and expertise in the technology, the project and its business, the dependencies you need to rely on to move forward (external systems/teams, business analysts, other people's availability) and your average capacity for solving problems in a given time.

  So when it comes to estimating a user story, you should ask yourself: “What is the risk of this story?” I would map risk to story points as follows:

1 point - Virtually no risk; insignificant work doable in a very short time.

3 points - Extremely low risk; I know all about it and can do it quickly, probably a matter of 1-2 hours.

5 points - Low risk; I know most of what I need to do; it probably fits in a few hours to one working day.

8 points - Medium risk; I know quite well what I need to do; I might hit some unexpected obstacles and maybe some dependencies on other (external) resources, but I am confident I can do it in 1-3 days.

13 points - High risk; there are aspects I have no idea how to tackle and external dependencies that I am worried about; it might take half a sprint to get it done.

21 points - Highest risk; I currently have no knowledge of the subject and no idea how to do it; there are lots of dependencies on other (external) resources that I cannot manage; I am not sure I can solve it within a sprint so that the story becomes demoable. Then you should ask yourself: is this really a good story, or rather an epic? Shouldn't it go into a research spike first, so that we gather more knowledge about how to do it?

However, when you estimate, always take into consideration collateral aspects such as writing unit/functional/integration tests (which can take as much time as coding the functionality, or more), team communication, code reviews and other things that should be part of your development process.

What do you think?

Cheers,
Dikran.

Tuesday, September 25, 2012

Always Elevated Privileges in Windows 7

As a developer I usually need to execute lots of commands and programs that require administrative privileges. Although my Windows user is in the Administrators group I always needed to do "Run as administrator" on command prompt, text editors and other applications that required privilege elevation, even if they were created by me or by programs that I launched!
After digging a little through the Windows permissions system, I found out that in order to improve security and minimize virus propagation risks, the Windows team decided that even if you are in the Administrators group, and even if you are the Administrator, it is better to explicitly grant elevated privileges interactively, so that malicious programs running in the background cannot get through without you knowing.
That said, I am quite sure that an experienced user can do fine even without this kind of assistance, especially in corporate environments where almost everything is filtered and secured.

So, to grant yourself elevated privileges without being prompted, you need to:
1. Open the Local Security Policy configuration by typing in the command prompt:

%windir%\system32\secpol.msc /s

A window titled "Local Security Policy" should open as below:


2. Navigate to the Security Options node:


3. On the right side, click the "Policy" table header to sort the entries alphabetically, so that all entries starting with "User Account Control" are easy to spot.

4. Select the entry called "User Account Control: Run all administrators in Admin Approval Mode":


5. Here is the key part. When enabled, this option instructs Windows to ask for your permission every time elevated privileges are required, even if you already have them. Double-click the entry and set it to Disabled. A Windows restart is required for the change to take effect:


Bottom line: be sure you know what you are doing, as this will lower your system's overall security.

Good luck!

Wednesday, May 4, 2011

Tailing log files over SSH in Windows

My problem: I wanted to be able to track logs on Unix machines using BareTail or something similar.
Until now the only solution was to use PuTTY and log the console output into a local file, then open it with BareTail.
Disadvantage: for each log file I had to open another PuTTY session.

Solution:
Dokan SSHFS (SSH File System)
It's a file system mapping application that maps a remote file system to a local Windows drive.
After installing it, I could open all 6 log files with BareTail as if they were on a local drive (N:).

Installation Steps:
1. Install the .NET 2.0 Runtime;
2. Install the Visual C++ 2005 Runtime;
3. Install the Dokan library 0.6;
4. Install Dokan SSHFS 0.2;
5. Download Dokan SSHFS 0.6;
6. From the 0.6 zip, copy DokanSSHFS.exe and DokanNet.dll over the files installed by the 0.2 installer (there is no installer yet for the 0.6 version);
7. Run DokanSSHFS.exe, then set the remote path, username and password, and choose the drive letter to be assigned;
8. In the options tab, check the option that disables the cache;
9. Click CONNECT.

From this moment you have a new drive in Windows that you can use normally with Explorer, BareTail, etc., plus an icon in the Windows taskbar that allows mounting/unmounting on the fly.
I hope you will find this useful.

Update:
There is an excellent log viewer called LogExpert that can tail files directly over SFTP on *nix servers. In my opinion this is the most complete log viewer, at least for Windows: highlighting, regular-expression filtering in a separate panel, columnizers and a lot of other useful stuff. Try it with confidence.

Good luck!

Dikran

Saturday, October 10, 2009

How to know when Java Virtual Machine is shutting down

The question is: why would someone need to know such a thing?

You already have the JDK's Runtime.getRuntime().addShutdownHook() method, which registers a thread to be run when the JVM is shutting down. So why another way?
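
Just as a reminder, the standard mechanism looks like this (a trivial, self-contained example):

public class ShutdownHookExample {

    public static void main(final String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // release resources here: connection pools, sockets, temporary files, etc.
                System.out.println("JVM is shutting down");
            }
        });
        System.out.println("leaving main, the hook runs during shutdown");
    }
}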

Well, I would have asked the same thing if I didn't experience an unusual situation.
(For the impatient reader who wants to skip the story, you can jump directly to the answer.)

I was working on a complex web application running in a JBoss/Tomcat container. We needed a clean server shutdown in order to release resources (connection pools, sockets, temporary files, etc.). At a certain point the team noticed that the undeploy operation worked well, while the majority of the shutdowns (but not all) were hanging. The displayed reason was a strange exception raised from the AWT thread:

Exception in thread "AWT-Windows" java.lang.IllegalStateException: Shutdown in progress
at java.lang.Shutdown.add(Shutdown.java:81)
at java.lang.Runtime.addShutdownHook(Runtime.java:190)

We could not understand why the exception would come from AWT, as we were not using AWT in our application at all... or at least we didn't know we were... So who was calling Runtime.addShutdownHook(), and why was it leading to a JVM freeze?

After digging around I discovered that we were indeed using AWT, although indirectly, through a reporting engine that used AWT classes to do its job. The reporting library we were using was packaged as a Struts plugin. The plugin contract is simple: Struts calls the init() method at startup and the destroy() method at shutdown on all registered plugins. The plugin's control class was a singleton that simply redirected the destroy() calls to its internal methods; of course, after first creating an instance of itself.
So what? you may say. This is the normal way to do such things.
Yes, only that in this particular case, unless you had used the reporting feature while working with the application, the reporting engine was not initialized until shutdown. So, at server shutdown, Struts called destroy() on all its plugins, and in its turn the reporting plugin, inside its destroy() method, called ReportsInitializer.getInstance().shutdown(). And... boom! JVM freeze.

OK, but why? It's just a class instance that does something within itself! How could it freeze the whole system?
Well, here comes the nice part. We have a system shutdown and a Struts plugin being called that instantiates a class using AWT elements inside. It does not look like something to be scared of... only there is a catch: when an AWT class is instantiated, the AWT Toolkit itself adds a ShutdownHook to the Runtime. I haven't dug deep enough to understand why. But, as the Java specs state, it is illegal to add a ShutdownHook once the JVM shutdown sequence has already started.
Some people consider this behavior in AWT a bug, because AWT should itself check whether the JVM is shutting down before attempting to add its own ShutdownHook.
OK, now we found out why it all happened. What next? I evaluated three possible solutions:
- add a generic thread exception handler and simply "swallow" the exception;
- add a ShutdownHook myself and set a flag on the reporting plugin so it would not instantiate the reporting initializer anymore;
- detect (inside the plugin's destroy() method) whether the virtual machine is shutting down and skip calling the reporting initializer.

I chose the last one, because a generic thread exception handler couldn't prevent the exception from happening in the first place, and for the second option, the order in which the JVM calls the ShutdownHooks is not guaranteed, so you never know whether yours will be called before or after Struts's own ShutdownHook, which makes that solution non-deterministic.

So I did some research to see whether I could possibly know, at any time, whether the Java Virtual Machine is shutting down. I discovered that when java.lang.Runtime.exit(int status) is called, it forwards the call to java.lang.Shutdown.exit(int status), which runs all the ShutdownHooks before calling the native halt() method. Inside this class there are fields describing the state of the shutdown sequence. Unfortunately the class is package-private and there is no public method that returns this state. But luckily, in Java we have the blessed reflection. So, here comes the answer:


private boolean isSystemShuttingDown() {
  try {
    // java.lang.Shutdown keeps the current phase in its static "state" field;
    // any value above RUNNING means the shutdown sequence has already started.
    Field running = Class.forName("java.lang.Shutdown").getDeclaredField("RUNNING");
    Field state = Class.forName("java.lang.Shutdown").getDeclaredField("state");
    running.setAccessible(true);
    state.setAccessible(true);
    return state.getInt(null) > running.getInt(null);
  }
  catch (Exception ex) {
    ex.printStackTrace();
    return false;
  }
}
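
With that helper in place, the fix in the Struts plugin becomes trivial. Here is a sketch based on the scenario above, with ReportsInitializer being the reporting plugin's singleton:

public void destroy() {
  // If the JVM shutdown sequence has already started, initializing the reporting
  // engine now would make AWT try to register a ShutdownHook and fail,
  // so we simply skip the call.
  if (isSystemShuttingDown()) {
    return;
  }
  ReportsInitializer.getInstance().shutdown();
}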

I would not advise anyone to use this on a daily basis unless they really need such a solution, primarily because:
- java.lang.Shutdown is a package-private JDK class and may be changed or removed in any upcoming release. Of course, for custom-built projects, which usually run for a long time on a single JDK version, this is not such a big issue.
- In public distributions there is a chance that the SecurityManager is configured to forbid access to JDK internals. But again, inside custom projects you can configure your own SecurityManager.

I hope you enjoyed reading this post; I'm looking forward to your comments.

Greetings,
Dikran

P.S. Thanks to Alex Gorbatchev for his excellent SyntaxHighlighter.