Adam Kolawa - Co-Founder and CEO of Parasoft
Adam Kolawa is the co-founder and CEO of Parasoft. Kolawa, co-author of Bulletproofing Web Applications (Wiley, 2001), has contributed to and written hundreds of commentary pieces and technical articles for publications such as The Wall Street Journal, CIO, Computerworld, Dr. Dobb's Journal, and IEEE Computer. He has also authored numerous scientific papers on physics and parallel processing.


Making Your Automated Build System Work for You

By Adam Kolawa, Co-Founder of Parasoft

When the appropriate policies are followed, an automated build process should provide early detection of incompatible changes in the application components, ensure that the application continues to run as expected, and detect any errors introduced by newly integrated code.

At regularly scheduled intervals, the automated process should access the most recent set of source files from the source control system, and then perform all tasks needed to build the application (compilation, initialization, linking, transfers, and any other steps required to construct it). Depending on the nature and complexity of the application being built, the automated build could take a long time and could involve multiple machines. If multiple versions of the product must be built, the build should automatically construct all required versions. In any case, messages related to build completion and issues should be made available to a designated team member; for example, results could be directed to a file, or e-mailed to that team member.

Builds can be automated using scripts, Makefiles, and build tools such as Ant. After you have a process for automating all build tasks, you can use utilities such as cron (for UNIX) or the Windows scheduling utility to ensure that the necessary tasks are performed automatically at the same time each day.
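As a sketch, a single crontab entry under the build account is enough to trigger the whole process each night; the driver script path and log location below are hypothetical:

```
# Run the nightly build at 2:00 a.m. every day.
# /home/build/bin/nightly_build.sh is a hypothetical driver script.
0 2 * * *  /home/build/bin/nightly_build.sh >> /home/build/logs/nightly.log 2>&1
```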

To get the most out of your automated build system:

Create a special build account
Before you start implementing an automated build, it's a good idea to create a special account for running the build. If you run the build from a special account, you eliminate the possibility of developer configuration errors and make nightly builds portable.

Clean the build area before each build
Cleaning involves removing all elements of the previous nightly build(s): sources, binaries, and temp files. Old files should always be removed before a new build begins.
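A minimal clean step might look like the following sketch, where $BLDROOT is an assumed environment variable naming the build area:

```shell
#!/bin/sh
# Sketch of the clean step; BLDROOT and its default path are
# hypothetical names used only for illustration.
BLDROOT="${BLDROOT:-/tmp/nightly-build}"

# Remove every trace of the previous build: sources, objects,
# binaries, and temp files alike, then recreate an empty area.
rm -rf "$BLDROOT"
mkdir -p "$BLDROOT"
```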

Shadow or clone the source code to the build directory
After the build area is cleaned, the source code that needs to be built should be copied to this directory. This copying can occur through shadowing or cloning.

Shadowing involves getting the latest project sources from the source control system. The sources should be stored in a directory that is accessible across the network. This way, if you have multiple machines running nightly builds, all nightly builds can access the archived source (or individual files) that were shadowed on the original machine.

Cloning involves copying previously shadowed source files over any existing files in the build directory. This process is called cloning because the same source archive is used for multiple machines (or multiple platforms).

If you have more than one machine using the same code for a nightly build, it is a good idea to shadow the code on only one machine, then clone that shadowed code on the other machines. This way, the same code is used for all builds, even if changes are introduced into the source control system between the time the first machine shadows the code and the time the last machine accesses it. If you create a tar file archiving the latest sources, the other machines can clone the build environment simply by retrieving that archive.
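The shadow-then-clone flow can be sketched in shell as follows; all paths are hypothetical, and writing a single file stands in for the actual source control checkout:

```shell
#!/bin/sh
# Sketch of shadowing on one machine and cloning on another.
SHADOW=/tmp/shadow      # where the sources are shadowed
ARCHIVE=/tmp/src.tar    # archive shared with the other build machines
CLONE=/tmp/clone        # build directory on a second machine

# Shadow: pull the latest sources from source control (simulated
# here by creating a file, so the sketch is self-contained).
mkdir -p "$SHADOW"
echo 'int main(void){return 0;}' > "$SHADOW/main.c"

# Archive the shadowed tree so every build machine sees identical code.
tar -cf "$ARCHIVE" -C "$SHADOW" .

# Clone: each additional machine unpacks the same archive over its
# build directory instead of contacting source control again.
mkdir -p "$CLONE"
tar -xf "$ARCHIVE" -C "$CLONE"
```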

Build the application each night, after cleaning the build directory and shadowing or cloning the source code
Building is the process of actually constructing the application. It can be as simple as executing make on the build directory. The builds should occur automatically each night, without human intervention, so that the team always has a recent build that includes the most recent source code modifications (or so that the team knows immediately when source code modifications cause the build to fail).

Integrate testing into the automated build process
For maximal effectiveness, automated build processes should automatically test the newly built application to verify that it satisfies the quality criteria that the team manager and architect deem critical; at least, it should run all available test cases and report any failures that occur. By integrating testing into the build process, you can verify that no code has slipped through the tests that developers are required to perform before adding their code to the source code repository.

Often, groups shy away from integrating testing into the build during development and requiring that code pass designated tests in order for the build to be completed. They assume that as code is added and modified, errors will inevitably be introduced into the application, and builds will fail frequently. However, these build failures are a blessing, not a problem: If there is a problem with the code, it is best to discover that problem as soon as it is introduced, when it is easiest, fastest, and least costly to fix.
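Gating the build on test results can be as simple as the following sketch; run_tests.sh is a hypothetical stand-in for the team's real test runner, simulated here so the example is self-contained:

```shell
#!/bin/sh
# Sketch: fail the nightly build the moment any test fails.
# /tmp/run_tests.sh is a hypothetical test runner, created here
# only so the sketch can be executed on its own.
cat > /tmp/run_tests.sh <<'EOF'
#!/bin/sh
exit 0   # pretend every test passed
EOF

if sh /tmp/run_tests.sh > /tmp/test-results.log 2>&1; then
    echo "nightly build OK"
else
    # A failing test is surfaced immediately, while the change is fresh.
    echo "nightly build FAILED: see /tmp/test-results.log" >&2
    exit 1
fi
```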

Completely automate the build process
Manually performing all the necessary steps correctly, day after day, is not only tedious, but also error-prone. When the process is automated, you can rest assured that the necessary steps will be performed correctly and consistently, day in and day out. Scripts, Makefiles, and build utilities, such as Ant, can be mixed and matched to automate the process to the point where all necessary cleaning, shadowing/cloning, building, and testing steps can be executed from a single command. Moreover, cron (for UNIX) or Windows scheduling utilities can be used to ensure that an automated process is executed at the same time each day.
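Put together, a single driver script can run every step in order and stop at the first failure; in this sketch each step is only a placeholder for the real work, and all names and paths are hypothetical:

```shell
#!/bin/sh
# One-command nightly driver (sketch; all names and paths are
# hypothetical, and each step body is a placeholder).
set -e                                  # stop at the first failing step

BLDROOT=/tmp/nightly

clean()     { rm -rf "$BLDROOT" && mkdir -p "$BLDROOT"; }
clone()     { echo "unpacking the shadowed source archive into $BLDROOT"; }
build()     { echo "running make in $BLDROOT"; }
run_tests() { echo "running the test suite against the new build"; }

clean
clone
build
run_tests
echo "nightly build finished"
```

With every step behind one command, a scheduler such as cron only ever has to invoke this single script.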

Use a hierarchy of Makefiles
When you use a hierarchy of Makefiles, you have one low-level Makefile that builds each part of the application, then a high-level Makefile that calls the low-level Makefiles in the designated order to build the entire application. Each developer needs to create a low-level Makefile to build his work within his sandbox. The high-level Makefile can then build the entire application by simply calling each of the existing low-level Makefiles in the appropriate order. If a developer implements a modification or correction in a low-level Makefile that is called by a high-level Makefile, that change is automatically applied when the entire application is built; there is no need to modify the high-level Makefile directly.
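As a sketch with hypothetical component names, the top-level Makefile can simply recurse into each component's low-level Makefile in order:

```makefile
# Top-level Makefile (sketch; the component names are hypothetical).
# It builds the whole application by invoking each component's own
# low-level Makefile in the required order.
COMPONENTS = libcore server client

all:
	for c in $(COMPONENTS); do $(MAKE) -C $$c || exit 1; done

# Each component directory holds its own low-level Makefile,
# e.g. libcore/Makefile might contain:
#
#   all: libcore.a
#   libcore.a: core.o util.o
#   	$(AR) rcs $@ $^
```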

Parameterize scripts and Makefiles
Parameterized scripts and Makefiles will be portable in a multi-machine/multi-user environment, where the different machines have different directory structures. For example, one way to parameterize these files is to use $BLDROOT as the environment variable that represents the root location of the nightly build source. When this location is a relative or parameterized project root path, the build process will work on any machine that has the correct $BLDROOT environment variable, even on machines that do not have the same directory structure as the initial build machine. In other words, parameterization makes the build process completely independent of any one machine's directory structure.
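For example, a build step parameterized on $BLDROOT might be sketched as follows (the default path and directory names are only for illustration):

```shell
#!/bin/sh
# Sketch of a parameterized build step. BLDROOT comes from the
# environment, so the same script works on any machine; the
# fallback path here is purely illustrative.
BLDROOT="${BLDROOT:-/tmp/bldroot}"

SRC="$BLDROOT/src"     # same relative layout on every machine
OBJ="$BLDROOT/obj"
mkdir -p "$SRC" "$OBJ"
echo "building from $SRC into $OBJ"
```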

For n-tier applications, establish and build to a staging area as well as a production area
When teams are working on n-tier applications, such as Web-based applications, the automated build process should be capable of building the application on a staging server as well as on the production server. The purpose of a staging area is to provide a safe zone where the application modifications can be thoroughly exercised and tested before they are made live. This way, errors can be found and fixed before the official deployment. Some files (such as static Web pages) can be thoroughly tested without a staging area, but dynamic functionality (such as login functionality, checkout operations, and so on) cannot. The staging area should look like the actual application but should contain copies of the same components used in the actual application.
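The promote-through-staging idea can be sketched as follows; every path is hypothetical, and a trivial file check stands in for the real staging tests:

```shell
#!/bin/sh
# Sketch of staged deployment for an n-tier app; all paths are
# hypothetical, and a file-existence check substitutes for the
# real tests of dynamic functionality (logins, checkout, ...).
SRC=/tmp/webapp-build
STAGING=/tmp/webapp-staging
PROD=/tmp/webapp-production

mkdir -p "$SRC" "$STAGING" "$PROD"
echo '<html>hello</html>' > "$SRC/index.html"   # simulated build output

# 1. Deploy the freshly built application to the staging area.
cp -R "$SRC/." "$STAGING/"

# 2. Exercise the staged copy; promote to production only on success.
if [ -f "$STAGING/index.html" ]; then           # stand-in smoke test
    cp -R "$STAGING/." "$PROD/"
fi
```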

Fully integrate automated builds with the source control system
An automated build does not exist in a vacuum. Everything that is related to the automated build — all the scripts, Makefiles, and other resources that are required to automate the complete build process at a scheduled time each day — should be stored in the source control system.