Thursday, November 10, 2011

A day at DBCI

This blog post offers pointers on implementing DBCI (database continuous integration) for your applications, based on a similar objective I achieved in a recent project. It started as a pretty standard application (without a DB), and we had a full-fledged CI handling not just builds but remote deploys, Selenium tests and performance tests too. We then started getting impeded a bit when we introduced the DAO layer. We had decided not to rely on JPA's query capabilities (with all due respect, they are good, but nothing works better than native SQL queries; you can also use the smart annotations JPA provides to map results for your native queries, but again not JPQL, which is a bit over-rated). So the big question was: there had to be a better, automated way to test our SQL queries. We went about integrating DBCI into our existing CI implementation. Here's a summary of what I wanted to achieve:
  1. There should be a way to setup test data for your unit tests.
  2. Have complete control and traceability for this data injected for tests.
  3. Be able to populate and clean your DB in the process of running tests for providing different sets of Data.
  4. All of this above should be automated.
  5. The Unit tests for this should not be very different from how you have been writing other unit tests.  

During development, teams normally tend to have a shared development database instance accessed by all developers. Nothing wrong with that in terms of sharing infrastructure, but relying on this instance for your unit tests is flawed: in such shared instances you do not have control over the data, and the dude next to you might well have deleted the rows your queries depend on. So while "it worked on your machine", the build can fail by the time it reaches the CI server. You are also better off running the tests against a database with a smaller subset of data relevant to the test scenarios, instead of a heavy, full-grown DB that adds to the slowness of your builds. Easier still is a build which fires up an in-memory database, creates the schema, adds the data you need and runs the tests. On a CI environment the same process is followed, since it is the same build script, though there you can probably make better use of a MySQL instance. This gives uniformity in the way you add test data, add your tests and execute those tests, irrespective of environment.

I'll move on to snippets for the implementation. Let's say you have an entity as follows:

@Entity
@Table(name = "releases")
@NamedNativeQueries(value = {
    @NamedNativeQuery(name = "releaseStories", query = "SELECT stories.release_id AS release_id, stories.story_id AS story_id, userstory.* "
        + "FROM release_stories stories INNER JOIN user_story userstory ON stories.story_id = userstory.id WHERE stories.release_id =?1", resultClass = UserStory.class),
    @NamedNativeQuery(name = "releaseIterations", query = "SELECT iterations.release_id AS release_id, iterations.iteration_id AS iteration_id, iteration.* FROM release_iterations iterations "
        + "INNER JOIN iteration iteration ON iterations.iteration_id = iteration.id WHERE iterations.release_id =?1", resultClass = Iteration.class),
    @NamedNativeQuery(name = "releaseProjects", query = "SELECT projects.release_id AS release_id, projects.project_id AS project_id, project.* FROM release_projects projects "
        + "INNER JOIN project ON projects.project_id = project.id WHERE projects.release_id =?1", resultClass = Project.class) })
public class Release implements Serializable {

  private static final long serialVersionUID = -843064609186009652L;

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private Long id;

  @Version
  private Integer version;

  @Column(nullable = false)
  private String name;

  @Column(length = 50)
  private String description;

  @Column(nullable = false)
  @Temporal(TemporalType.DATE)
  private Date startDate;

  @Column(nullable = false)
  @Temporal(TemporalType.DATE)
  private Date endDate;

  @Column(nullable = false)
  private ReleaseStatus status;

  @OneToMany(cascade = { CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
  @JoinTable(name = "release_iterations", joinColumns = @JoinColumn(name = "release_id"), inverseJoinColumns = @JoinColumn(name = "iteration_id"))
  private Set<Iteration> iterations;

  @OneToMany(cascade = { CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
  @JoinTable(name = "release_projects", joinColumns = @JoinColumn(name = "release_id"), inverseJoinColumns = @JoinColumn(name = "project_id"))
  private Set<Project> projects;

  @OneToMany(cascade = { CascadeType.DETACH, CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH })
  @JoinTable(name = "release_stories", joinColumns = @JoinColumn(name = "release_id"), inverseJoinColumns = @JoinColumn(name = "story_id"))
  private Set<UserStory> stories;
  ...
  

You can see here we have this entity mapped to three others, all through join tables. I won't get into the details of the DAO classes much, but let's say we had a method like this in a DAO for this entity:

  /**
   * Loads the user stories associated with the given release and sets
   * them on the entity.
   *
   * @param release the release whose stories should be fetched
   */
  @SuppressWarnings("unchecked")
  private void findReleaseStories(Release release) {
    Query storiesQuery = entityManager.createNamedQuery("releaseStories").setParameter(1, release.getId())
        .setMaxResults(30);
    List<UserStory> stories = storiesQuery.getResultList();
    LOGGER.debug("Release with id " + release.getId() + " has " + stories.size() + " associated user stories");
    Set<UserStory> userStories = new HashSet<UserStory>(stories);
    release.setStories(userStories);
  }

Assuming you have a JUnit test (set up using some Spring annotations so that all dependencies are injected and your JUnit test can call the DAO implementation), here's how Maven and some of its handy plugins can help you achieve that DBCI.

Setting up a clean schema and injecting data: the all-SQL approach

The first step in our CI build is to get a clean schema into which we can inject test data. The hibernate3-maven-plugin serves well here; you can use its schema tool to execute schema creation through a provided SQL file:

<!-- Create/Update the test database instance -->
     <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>hibernate3-maven-plugin</artifactId>
      <version>2.2</version>
      <executions>
       <execution>
        <id>mysql</id>
        <phase>process-test-resources</phase>
        <goals>
         <goal>hbm2ddl</goal>
        </goals>
       </execution>
      </executions>
      <configuration>
       <components>
        <component>
         <name>hbm2ddl</name>
         <implementation>jpaconfiguration</implementation>
        </component>
       </components>
       <componentProperties>
        <drop>true</drop>
        <jdk5>true</jdk5>
        <persistenceunit>hbm2ddl-persistence</persistenceunit>
        <skip>${skipTests}</skip>
       </componentProperties>
      </configuration>
      <dependencies>
       <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.16</version>
       </dependency>
       <dependency>
        <groupId>org.hsqldb</groupId>
        <artifactId>hsqldb</artifactId>
        <version>1.8.0.10</version>
       </dependency>
       <dependency>
        <groupId>javax.validation</groupId>
        <artifactId>validation-api</artifactId>
        <version>1.0.0.GA</version>
       </dependency>
       <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-validator</artifactId>
        <version>4.0.0.GA</version>
       </dependency>
       <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-entitymanager</artifactId>
        <version>3.6.6.Final</version>
       </dependency>
       <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-core</artifactId>
        <version>3.6.6.Final</version>
       </dependency>
       <dependency>
        <groupId>org.hibernate.javax.persistence</groupId>
        <artifactId>hibernate-jpa-2.0-api</artifactId>
        <version>1.0.1.Final</version>
       </dependency>
       <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-c3p0</artifactId>
        <version>3.6.6.Final</version>
        <exclusions>
         <exclusion>
          <artifactId>slf4j-api</artifactId>
          <groupId>org.slf4j</groupId>
         </exclusion>
        </exclusions>
       </dependency>
      </dependencies>
     </plugin>

And here's the JPA configuration to go with it:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
 <persistence-unit name="hbm2ddl-persistence" transaction-type="RESOURCE_LOCAL">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>
  <!-- for implementation specific overrides -->
  <mapping-file>META-INF/orm.xml</mapping-file>
  <class>domain.Release</class>
  <class>domain.Project</class>
  <class>domain.Iteration</class>
  <class>domain.UserStory</class>
  <properties>
   <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect" />
   <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
   <property name="hibernate.hbm2ddl.auto" value="create" />
   <property name="hibernate.hbm2ddl.import_files" value="/test-import.sql" />
   <property name="hibernate.show_sql" value="true" />
   <property name="hibernate.connection.username" value="liquibrain" />
   <property name="hibernate.connection.password" value="password" />
   <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/liquibrain-test" />
  </properties>
 </persistence-unit>
</persistence>

The SQL file can contain not just the DDLs but insert statements too; however, be very sure that every SQL statement is on a single line, else the tool throws an error. A snippet from the SQL file:

CREATE TABLE releases (id bigint(20) NOT NULL AUTO_INCREMENT ....... 
CREATE TABLE project (id bigint(20) NOT NULL AUTO_INCREMENT .......
CREATE TABLE iteration (id bigint(20) NOT NULL AUTO_INCREMENT ........ 
CREATE TABLE user_story (id bigint(20) NOT NULL AUTO_INCREMENT .......

INSERT into releases values ......
INSERT into project values ......
INSERT into iteration values ......
INSERT into user_story values .......

Now, this SQL file lives in our src/test/resources directory; so if you add a query which has to be tested in your unit tests, you add insert statements to the end of this file, which gives a uniform way of providing test data.

That's pretty much it: the build will now clean the schema and add test data on every run for your unit tests to work against. You can of course tweak it further, say by separating it into a profile, if the tests start taking too long to execute.

Another way for injecting data can be the DBUnit Maven Plugin where you structure data as XML files. More about that in the next blog post.

Wednesday, September 28, 2011

ControllerClassNameHandlerMapping in Spring

ControllerClassNameHandlerMapping is a very good way to clean up Spring XML configuration and, as I see it, to enforce a sort of convention for building URLs within your application. Detailed documentation can be found here, so I'll skip that part and just summarize it before I go to other things. Provided you stick to a naming convention with your controllers and views, it enables creating clean URLs with a sort of uniform access pattern for your application and reduces the amount of code you write. Which is to say that a URL /project will be mapped to ProjectController; and not just that, if you've added a POJO bean called project to the Model you return, you don't need to name the view or the attribute you are adding before you forward or redirect to it.

But there are some lesser-known things about using it that got me stuck before I could fully utilize this feature, so this blog post aims at elaborating on those parts. It is based on all I learned solving issues in my application and doing those long searches in the Spring forums (it may just be my poor Google skills, though; I have always been more of a documentation-and-forums person than a search person). Here's the list:

You will have to set the defaultHandler property when configuring it as a bean in your Spring XML if you are using SimpleControllerHandlerAdapter; not specifying a default handler in such a case gives the following exception:

javax.servlet.ServletException: No adapter for handler [your.controller.ProjectController@6d78ddf4]: 
Does your handler implement a supported interface like Controller?
	org.springframework.web.servlet.DispatcherServlet.getHandlerAdapter(DispatcherServlet.java:967)
	org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:760)
	org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
	org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
	org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549)
here's how you do that:
<!-- Bind to a naming convention for controllers and views -->
  <beans:bean id="classnameControllerMappings"
    class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping">
    <beans:property name="defaultHandler">
      <beans:bean
        class="org.springframework.web.servlet.mvc.UrlFilenameViewController" />
    </beans:property>
  </beans:bean>

There is another approach, which lets you through without having to tweak the configuration too much: use AnnotationMethodHandlerAdapter and annotate the methods to be invoked for a URI with @RequestMapping. Your bean definitions then look as follows:

  <!-- Bind to a naming convention for controllers and views -->
  <beans:bean id="classnameControllerMappings" 
class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping">
  </beans:bean>

  <!-- Enables annotated POJO @Controllers -->
  <beans:bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" />
Paths mapped by this approach:
ProjectController --> /project, /project/*
which means a URL like /project/create will be mapped, but /project/details/show won't. So you have to ensure your URLs do not have a third path element; that won't work with this approach and is not part of this mapping class's specification.

Thursday, July 21, 2011

working with the sonar web services API

Sonar does an amazing job of presenting code quality information at different levels of detail and in different views, each catering to a particular audience (architects, managers, developers, etc.). It was natural to quickly fire up their documentation on how to write Sonar plugins when I started working on making the emails that Sonar sends a bit more informative. However, true to my habit of quickly jumping to the complicated way of doing things :-), I then realized I could use the web services API to create a simpler solution. So here's an example of how you can utilize the web services API that exposes violations for a project to create a quick summary of sorts. I used Ruby for this (and I confess I have started liking it the most for writing these helper scripts around a development process), but it can be done equally well in Java, Python or the like. And if you don't like XML, there's JSON too.

1. A helper method to build the URL for fetching violations in the project:
def build_violations_url(sonar_url, resource, *optional)
  if resource.nil?
    raise(ArgumentError.new("Project Key/Id for fetching violations cannot be nil."))
  end
  if sonar_url.nil?
    raise(ArgumentError.new("Sonar Host URL for fetching violations cannot be nil."))
  end
  # build the URL to query
  query_url = sonar_url + "/api/violations?resource=#{resource}"
  # append all optional arguments
  optional.each do |opt_arg|
    query_url = query_url + "&#{opt_arg}"
  end
  return query_url
end
So a call like
query_url = s.build_violations_url("http://sonarhost/sonar", "com.foo:bar", "depth=-1", "format=xml")
will give you the correct URL to hit.

2. Extract the number of occurrences of each priority:

def generate_violations_table(query_url)
  blocker_ctr = 0
  critical_ctr = 0
  major_ctr = 0
  minor_ctr = 0
  # download is a helper (not shown here) that fetches the URL to a local file
  download(query_url, "./violations.xml")
  f = File.open("violations.xml", "r")
  violations = Nokogiri::XML(f)
  f.close()
  # tally each priority
  violations.xpath("//violation/priority").each do |priority|
    case priority.xpath("text()").to_s
    when "BLOCKER" then blocker_ctr += 1
    when "CRITICAL" then critical_ctr += 1
    when "MAJOR" then major_ctr += 1
    when "MINOR" then minor_ctr += 1
    end
  end
  hash = Hash.new
  hash["BLOCKER"] = blocker_ctr
  hash["CRITICAL"] = critical_ctr
  hash["MAJOR"] = major_ctr
  hash["MINOR"] = minor_ctr
  return hash
end


You can then transform the hash into a string and send it in a mail, or use the ruport gem to create a more nicely formatted report out of it.
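As a minimal sketch of that transformation (assuming the hash shape returned by generate_violations_table above; the column widths are arbitrary):

```ruby
# Render the priority counts returned by generate_violations_table
# as a plain-text summary table.
def format_violations_table(counts)
  lines = ["Violation    Occurrences"]
  %w[BLOCKER CRITICAL MAJOR MINOR].each do |priority|
    lines << format("%-12s %d", priority, counts.fetch(priority, 0))
  end
  lines.join("\n")
end

counts = { "BLOCKER" => 0, "CRITICAL" => 0, "MAJOR" => 1, "MINOR" => 8 }
puts format_violations_table(counts)
```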

Here's my output (a mail sent using net/smtp):
Mr. Build Manager says you have a look at this:
Summary of nightly Sonar analysis

Violation    Occurrences
BLOCKER    0
CRITICAL    0
MAJOR     1
MINOR     8


 You can view the detailed dashboard here:
http://sonarhost/dashboard/index/1


Happy Coding!
Mr. Build Manager

Wednesday, March 2, 2011

yet another svn change log tool

This one helps organize revisions by path entry based on commit messages. It will only work if you have enforced a pre-commit hook ensuring every commit has a message which starts with some sort of an ID for the change request, feature or defect fix for which the commit is being made.
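The original hook is not shown; the message check such a hook performs could be sketched along these lines (the ID pattern, e.g. "S6556:", is an assumption; adapt the regexp to your own convention):

```ruby
# Sketch of the commit-message check a pre-commit hook could run.
# Assumption: change IDs look like "S6556:"; adapt the regexp to
# your own change request/feature/defect naming scheme.
def valid_commit_message?(message)
  !!(message =~ /\A[A-Z]+\d+\s*:/)
end

# In a real hook you would read the message with
# `svnlook log -t "$TXN" "$REPOS"` and exit non-zero when this
# check fails, which aborts the commit.
```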

It's very raw (I am still far from being a good Ruby programmer) but works well for now....

#!/usr/bin/env ruby
require 'rubygems'
require 'nokogiri'
branch_path = ARGV[0]
start_date = ARGV[1]
end_date = ARGV[2]
search_string = ARGV[3]

file = File.new("svn_logs_#{start_date}-#{end_date}_[#{search_string}].xml", "w")
file.puts('<?xml version="1.0" encoding="UTF-8"?>')
file.puts("<log>")
file.puts(`svn log #{branch_path} --verbose -r {#{start_date}}:{#{end_date}} --incremental --xml`)
file.puts("</log>")
# close to flush the XML to disk before parsing it
file.close()

document = Nokogiri::XML(File.new("svn_logs_#{start_date}-#{end_date}_[#{search_string}].xml"))
path_entries = document.xpath("//path")
entries_array = []
path_entries.each do |entry|
    entries_array << entry.content
end
uniq_entries = entries_array.uniq

changeFile = File.new("change_log_#{start_date}-#{end_date}_[#{search_string}].txt", "w")
revision_separator = "------------------------------------------------------------------------"
changeFile.puts "Branch path: #{branch_path}"
changeFile.puts "Start date: #{start_date}"
changeFile.puts "End date: #{end_date}"
changeFile.puts "Search string: #{search_string}"

uniq_entries.each do |uniq|
  published = false
  logentries = document.xpath("//path[text()='#{uniq}']/../..")
  logentries.each do |logentry|
    if logentry.xpath(".//msg/text()").to_s.downcase.include? search_string.downcase
      puts(logentry.xpath(".//msg/text()").to_s)
      if published == false
        changeFile.puts(revision_separator)
        changeFile.puts "Path: #{uniq}"
        changeFile.puts "Revisions Changed:"
      end
      published = true
      changeFile.puts("\t #{logentry.attr("revision")}")
    end
  end
end
changeFile.close()
file.close()

And here's the output:
Branch path: svn://my.svn.com/repo/trunk
Start date: 2011-02-09
End date: 2011-03-02
Search string: S6556
------------------------------------------------------------------------
Path: /trunk/cminput/data/release/1103-release.xml
Revisions Changed:
         57979
         58023
         58049
         58091
         58095
         58225
         58226
         58232
------------------------------------------------------------------------
Path: /trunk/gpd/xml/templates/product-color.xml
Revisions Changed:
         57979
         58006
         58020
         58055
         58087
         58091
------------------------------------------------------------------------
Path: /trunk/gpd/src/main/java/com/someapp/gpd/policy/productcolor/ProductColorPolicy.java
Revisions Changed:
         57979
         58020
         58023
         58055
         58087
         58091