Rick Winfrey

SimpleCov, Flay, Flog and Saikuro

November 28, 2012 » 5 minutes (964 words)

Today, Micah asked me to use four Ruby code analysis tools on my Ruby tic tac toe program. I was excited about this, because I have never used any code analysis tools before, and I was curious to see how my code would fare.

The first tool I used was SimpleCov. It measures code coverage, a topic that, like most things related to software development, could be studied in depth on its own. What is particularly great about SimpleCov is that after applying its heuristics to determine how well your tests actually cover your code, the tool generates a nicely formatted HTML report that summarizes the results. Every file is given a coverage percentage, so files with weak coverage can be spotted at a glance, prompting the developer to write more comprehensive tests. Being so new to TDD, I wasn’t expecting great scores, but I was happy to see that the overall coverage for my tic tac toe program was around 90%.
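For anyone curious what the setup looks like, SimpleCov is started before the application code is loaded, typically at the top of the test helper (the filter, group, and file names below are just illustrative, not my actual project layout):

```ruby
# spec_helper.rb -- SimpleCov must start before any application code is
# required, or those files won't be tracked in the coverage report.
require 'simplecov'
SimpleCov.start do
  add_filter '/spec/'            # exclude the tests themselves
  add_group 'Game Logic', 'lib'  # optional grouping in the HTML report
end

require_relative '../lib/game'   # hypothetical application entry point
```

Running the test suite afterwards writes the report to coverage/index.html.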

The next tool I used was Flay. It looks at the structure of your code and identifies duplicated code, or methods that are structurally similar to other methods, possibly allowing those methods to be refactored into slimmer, cleaner ones. The output of this tool wasn’t as impressive as SimpleCov’s, but I was happy to see that only my presenter class registered as a good candidate for refactoring. Throughout writing this program I tried to maintain a strict object boundary between the game logic and the input and output, but there are parts of my presenter class that I can see are smelly. I struggled to think of ways to maintain this boundary, so I used dependency injection to help contain the passing of messages from certain parts of the application to my presenter object. Still, I feel like I’m not quite seeing how best to do this, and it’s something I’m looking forward to talking with Micah about. After running the Flay analysis, it was affirming to see a result that confirmed what I already perceived as a weakness of my application’s design. After all, the first step to solving a problem is knowing what the problem is.
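To illustrate the kind of duplication Flay hunts for, here are two hypothetical presenter methods (not my actual code) with identical structure. Flay parses code into s-expressions and reports structurally similar subtrees, so a pair like this would likely register as a refactoring candidate:

```ruby
# Two structurally identical methods -- only the strings differ, which
# is exactly the kind of near-duplication Flay flags.
def display_winner(player)
  message = "Player #{player} wins!"
  puts message
  message
end

def display_turn(player)
  message = "Player #{player}, make your move."
  puts message
  message
end

# One possible refactoring: a single parameterized method.
def display(template, player)
  message = format(template, player)
  puts message
  message
end
```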

After Flay, I used Flog, which, like the other tools here, is distributed as a gem. Ruby has a certain reputation for having a wide and diverse collection of gems. Basically, a gem is a library or package that can be added to an application’s environment. Some gem names are fun and descriptive (httparty), while others are neither (Mandy). Ruby also has some easy-to-use tools for managing gems and versions of Ruby. If you’re new to Ruby and are curious how that works, both Bundler and RVM are considered essential tools, although lately I’ve been meeting more and more Rubyists who prefer rbenv for various reasons, including wanting to use the Fish or Zsh shells (RVM only works in bash) or because RVM is sometimes criticized for changing aspects of the shell environment. For more on the differences, I found this post highly entertaining.
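As an aside, pulling these tools into a project with Bundler is just a matter of declaring them in a Gemfile. This is only a sketch (simplecov, flay, and flog are published under those names on rubygems.org; Saikuro has shipped in several forms, so check its exact gem name before adding it):

```ruby
# Gemfile -- declaring the analysis tools as development dependencies.
# After `bundle install`, each can be run with `bundle exec`,
# e.g. `bundle exec flog lib/`.
source 'https://rubygems.org'

group :development do
  gem 'simplecov'
  gem 'flay'
  gem 'flog'
end
```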

Flog’s job is to use an ABC metric to analyze code size and return a score of how “painful” the code is. The higher the score, the more flogging a developer’s mind will have to take in order to navigate and understand that code. To get a sense of how good or bad my Flog scores were, I found this short post with a helpful breakdown. Before running Flog, I was expecting the AI class to have the highest score, because it contains the minimax method; minimax is the longest method, with the most branching, in the tic tac toe application. I was pleased to see that the average Flog score for my application was better than I expected, and I wasn’t surprised that my minimax method received the highest score.
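For context, the classic ABC metric scores a method by counting its Assignments, Branches (method calls), and Conditions, and combining the three counts as the magnitude of a vector. Flog builds on this idea but applies its own per-operation weights, so the snippet below is only a sketch of the underlying arithmetic, not Flog’s actual scoring:

```ruby
# The classic ABC score: the length of the (assignments, branches,
# conditions) vector. Flog weights individual operations differently,
# so real Flog scores won't match this exactly.
def abc_score(assignments, branches, conditions)
  Math.sqrt(assignments**2 + branches**2 + conditions**2).round(2)
end

# A method with 3 assignments, 4 method calls, and 2 conditionals:
abc_score(3, 4, 2)  # => 5.39
```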

The first three tools were fairly straightforward to set up, but Saikuro was a little more challenging because it is incompatible with Ruby 1.9 (the current version of Ruby). Switching to Ruby 1.8.7 was no big deal, and after verifying that Saikuro’s executable was on my path, it was smooth sailing. Saikuro is a cyclomatic complexity analyzer, and it measures code complexity. I don’t really understand how cyclomatic complexity analysis works, but from the basic understanding I have, I started imagining a tool that could represent a cyclomatic complexity analysis as a visual graph, showing how various parts of the application connect to each other as nodes (perhaps something to try at a future Waza). I didn’t expect much in my tic tac toe program to produce a high complexity score, except for the minimax method, which my Saikuro results confirmed. This tool was the most opaque to use, as I was never able to get the -h flag to print its help text (I was curious what flags it offers, to learn more about its functionality).
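From what I’ve read since, cyclomatic complexity counts the linearly independent paths through a method: roughly one for the straight-line path plus one per decision point. A small hypothetical example (not from my tic tac toe code):

```ruby
# Cyclomatic complexity here is 4: the straight-line path plus three
# decision points (the guard clause, the if, and the elsif).
def describe_cell(mark)
  return 'empty' if mark.nil?   # decision point 1
  if mark == 'X'                # decision point 2
    'crosses'
  elsif mark == 'O'             # decision point 3
    'noughts'
  else
    'invalid'
  end
end
```

A branchy method like minimax accumulates decision points quickly, which is why it stood out in both the Flog and Saikuro results.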

These tools were fun to learn about, and they introduced me to several new ideas. I had never heard of a cyclomatic complexity analyzer or ABC metric before today, and these tools gave me new ways to think about and break apart my code in order to find the smelly spots. However, the real measure of how I did will come from code review with Micah. I’m very excited to have my code dissected and have someone with many years of experience shine a light on the parts that could be better. As an apprentice, I hope to soak up and learn as much as possible from my mentor, and the people around me who have already traversed the hill.