If you read my previous post (WordPress Recovery), you know I’ve been writing some code to recover my old posts. It occurred to me I could take a small segment of what I’ve been doing with that code to demonstrate my approach to TDD.

Since I’m a hacker from way back, and also because I was in semi-panic mode about losing the content, I didn’t approach this task with testing in mind. Now that doesn’t always result in bad code: I’ve been doing this long enough that I can usually think through a fairly good model and code something that isn’t one long method full of inline code.

In this case, however, once I had started coding, I realized that this was another opportunity to practice coding the “right” way. I had already begun with a Maven project and generated unit tests as I built out the code, so I had at least some good, functioning unit tests.


I am a big fan of Test Driven Development (TDD), and tools like Hudson/Jenkins that automate a continuous integration build are key to making it work.

On my current project we recently started moving things to Amazon EC2, and rather than put everything on one big server, I thought I’d follow the best practices in cloud computing and make a number of small special purpose servers to take care of the project’s needs.

We’ve had a Jenkins server running for a bit, so rather than reinventing the wheel, I figured I could copy my Jenkins configuration to a new server and get things up and running.

I fired up a new Tomcat server on Amazon Elastic Beanstalk, and loaded up the Jenkins WAR file, which quickly got me to a working Jenkins server. This project is written in PHP, so I had to install PHP after that, which meant logging in to the server and running through the whole PHP and PHPUnit setup.

Once that was done, I scp’d the Jenkins folders from the old server, edited the Tomcat startup files to set the environment variable (JENKINS_HOME) that points Jenkins to the right place, changed a few permissions, and everything appeared to be working.

I could log in, I fired up the build, and it appeared to be running. Very cool.

But that was when the flaw in the design of the PHP unit tests was exposed ….

I was watching the output of the phpunit tests, and noticed two things:

  1. The tests seemed to be taking a really long time
  2. Every test was failing

Watching the console, each time a test failed the little “E” would print, then a few seconds would go by and another “E” would appear. Finally, after many minutes (because we have a LOT of classes to test), the error output appeared: a PHP_Invoker_TimeoutException for EVERY test.

And of course there were 5297 of these … I did some Google searches for PHP_Invoker_TimeoutException, which mostly pointed to issues with upgrading from one version of PHPUnit to another, but the versions on the old server and this one were the same.

So my next step was debugging an individual test. Running the test from the command line with phpunit gave me the same error. Odd. But then I ran the test using php directly instead of the phpunit call, and found the problem: I was getting a timeout trying to open a database connection.

The issue, as it turns out, was a design flaw in our code that hadn’t shown up before: all the classes invoke a database connection class that opens the connection to the database as soon as they are loaded.

Since the Elastic Beanstalk server was in a security group that wasn’t allowed to connect to the RDS database, it couldn’t connect at all, and PHPUnit would simply time out before the connection attempt failed (by default phpunit sets 1 second as the acceptable time for a test to run, in order to catch endless loops).
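That per-test time budget is why every test showed the same symptom: the alarm fired long before the TCP connection attempt gave up. Here’s a rough sketch of the mechanism in Python (the invoker itself is PHP; the function and exception names here are made up for illustration):

```python
import signal
import time

class TestTimeout(Exception):
    pass

def run_with_time_limit(test_fn, seconds=1):
    """Rough analogue of PHPUnit's per-test time limit:
    arm an alarm, run the test, report 'E' if it overruns."""
    def on_alarm(signum, frame):
        raise TestTimeout(f"test exceeded {seconds}s")

    old_handler = signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(seconds)            # deliver SIGALRM once the budget expires
    try:
        test_fn()
        return "."                   # pass
    except TestTimeout:
        return "E"                   # the character we watched scroll by
    finally:
        signal.alarm(0)              # disarm the alarm
        signal.signal(signal.SIGALRM, old_handler)

# A test that hangs on an unreachable database looks exactly like this:
print(run_with_time_limit(lambda: time.sleep(5)))   # E
print(run_with_time_limit(lambda: None))            # .
```

A hung connection attempt and an endless loop are indistinguishable to this mechanism, which is why the real cause took some digging.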

Now in theory our tests shouldn’t be hitting the database at all (at least not these unit tests, since we don’t want them updating anything on the backend), so this problem turned out to be very fortunate. Because the Jenkins server couldn’t reach the database, it exposed a flaw in our unit tests: we weren’t mocking everything we needed to, so the tests were actually opening connections to the database.
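The fix is standard dependency injection: hand the test a double instead of letting the class build a live connection for itself. A minimal sketch in Python (the project’s code is PHP, and every class name below is invented for illustration, not from our codebase):

```python
class RealConnectionError(Exception):
    pass

def open_real_connection():
    # Stand-in for the network call that hung on the Jenkins box.
    raise RealConnectionError("no route to the RDS database")

class EagerReportDao:
    # The flaw: a live connection is opened the moment the object is
    # built, so every "unit" test silently depends on a reachable DB.
    def __init__(self):
        self.conn = open_real_connection()

class FakeConnection:
    """Test double: records queries, never touches the network."""
    def __init__(self, rows=None):
        self.rows = rows or []
        self.queries = []

    def query(self, sql):
        self.queries.append(sql)
        return self.rows

class ReportDao:
    # The fix: the connection is injected, so tests can pass a fake.
    def __init__(self, conn):
        self.conn = conn

    def count_users(self):
        return len(self.conn.query("SELECT id FROM users"))

fake = FakeConnection(rows=[{"id": 1}, {"id": 2}])
dao = ReportDao(fake)
print(dao.count_users())    # 2 -- fast, offline, no database needed
print(fake.queries)         # ['SELECT id FROM users']
```

Once the dependency is injected rather than constructed at load time, the unit tests run in milliseconds whether or not a database is reachable.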

With some refactoring of the test classes to mock the database access layer, the tests all succeeded. Next we’ll need to do the actual DBUnit tests for the database, and Selenium or HTTPUnit tests for all the front-end and AJAX stuff.

Just a quick note: I was guided toward doxygen, a tool that generates documentation from source code. It has a DMG installer that includes an executable to run doxygen with a wizard.

But since my purpose was to run doxygen in an Ant build script, I needed to install the command line version.

I simply looked inside the application, and was able to see that the executable was in /Applications/

Adding a symbolic link to /usr/local/bin (which is in my PATH) did the trick.


I’ve been working with Office and Project for a while now, and one of the things I love about the new 2010 versions is that you can save as PDF from almost any application.

So today I was saving a Project plan as a PDF, and noticed it was breaking across pages in weird ways. Since I’ve done something similar with Visio a LOT, I figured the control for page size would be in Page Setup, but I couldn’t find any mention of page setup in the menus or the ribbon …

So hunting a bit, I figured that it might be on the print preview, which I found on the “File” tab.

And that’s when I got stymied for a bit: the controls for page setup were all greyed out.

Finally it occurred to me that maybe I needed to have Print Preview active, and off to the right was a button that said “Print Preview”.

Clicking that activated all the controls, and I could see a preview of the printout, and even get to the handy-dandy Page Setup (as well as the other page settings).

For this particular one, all I wanted to do was change the page size to be bigger (so I chose the 11×17 in landscape), and limit the dates to this contract year.


But clicking “Page Setup” also lets you do things like scale the printout to fit the page, set margins, etc.

Once you have everything the way you want it (in other words, the image of the printout on the right looks good), you can click “Save As” and choose PDF as the format you want to save.

The newly saved document will be scaled and limited to what you chose in the File/Print/Print Preview settings!


I recently read a post on LinkedIn on the WAFUG group by Andrew Hedges:

Frameworks or libraries?

“Frameworks are larger abstractions than libraries. Abstractions leak, cost performance and take up mental resources.” …

Is the whole framework craze overkill for most projects? At what point does it make sense to use a framework over libraries that just do what you minimally need? Is it better to start with a large, leaky abstraction or only impose it if/when the project gets big enough to need it?

The answer to this for me is “it depends”, and it’s what keeps architects up at night. Frameworks are usually attempts to encapsulate some best practice or design patterns in a way that will help the developer achieve some gain in productivity, scalability or reliability.

During the dotBomb era, I worked for a small boutique consulting company designing web sites for a lot of startups. All of them were convinced they would be the next big thing, and most of them had extremely aggressive targets for scalability.

Because of this, we spent quite a bit of time with various frameworks trying to understand the balance between the complexity introduced and the scalability that the framework would give us. At the time, Sun was busy pushing the EJB framework, which was a study in how to over-engineer software. The big benefit promised by EJB was that you could theoretically scale your application to infinity and beyond. The downside was that it basically took a huge team of rocket scientists to get off the ground (this was not the nice simple POJO based EJB of today).

What we found was that in most cases, we could get the same sort of scalability for our clients out of simple Model 2 JSP-based applications (an MVC approach) by taking care to design the application from the database forward. By using the database for what it is really good at (caching, storing and reading data), building DAOs to interact with the database, and factoring the business logic out of the JSPs, we were able to build a reliable MVC framework that we used with many clients. The framework was very similar to Struts, which didn’t yet exist, and which we started using once Struts2 was released.

Turns out that the amount of traffic that you have to experience to require the overhead of a complex framework like EJBs is not realistic for any but a handful of web sites.

Fundamentally, as an architect it’s my job to figure out what the real problem is, to solve it in a way that answers the business need immediately, and to build it in a way that allows for some level of deviation from today’s needs (future scalability and flexibility). So for almost every case we came across, there was too much complexity and overhead in the EJB framework to make adoption a valid choice back then. Not only did an EJB design require significantly more developer work than the alternatives (making it more costly and harder to change), the initial application wouldn’t perform as well (since it was burdened with all the code required to make it scale seamlessly).

All of that said, EJB is also a great study in how a framework can be improved. With the current value proposition of EJB3, the objections that were so clear before have gone away: it no longer takes a rocket scientist to engineer EJBs, and in fact any complexity is fairly well hidden from the developer. Most of the overhead of the framework has been moved into the run-time, so it scales much more appropriately.

As an architect, the decision to include a framework like EJB becomes much easier when it both saves development time and gives me a clean and simple path to change. There’s always a balancing act between time to market and anticipated change in the application, too. I worked on some applications that used MS Access or Domino to get off the ground quickly, because when used as a framework those applications are great RAD tools. You can prototype and get to the 95% level of usability for an application very quickly, and for many apps this is better than good enough.

The problem with these (as with almost any framework) comes when you need to do something the framework wasn’t designed for. You built a killer app in Access, and now you want to roll it out to the entire company. Well, even though Access claims to support multiple users, it turns out to be a pain, and it uses some very archaic methods for things like sharing the database. And even if you reengineer it to store your data in an enterprise DB, it still has to be deployed somewhere or shared (which again runs you into the locking and corruption problems).

Every problem requires thought and some knowledge of the limitations of the tools in your tool box. By understanding the business problem, and coming to a reasonable understanding of what sorts of changes you can anticipate (including unanticipated ones), you can choose the right level of complexity to solve the problem for today and tomorrow.