A couple of weeks ago I had my first exposure to the Salesforce CLI Scanner and it’s fair to say I quite liked it – two blog posts in two days is above my usual cadence. The first thing I wanted to do was find out if I could tune it to my specific requirements, which I was pleased to find out I could by creating custom XPath rules and installing them into my CLI instance.
The next job was to introduce them into my workflow and take action when issues were flagged up, which I’ve been working on in what time I could carve out.
Customising the Default Rulesets
Some of the default rules from the CLI Scanner weren’t applicable to all of my projects – an example being CRUD/FLS enforcement. If I’m not targeting the AppExchange, a lot of the time my code needs to be able to carry out actions on specific objects and fields regardless of the user’s profile – I’m relying on running in system mode. I can’t see the point in letting these be flagged as warnings only to hide or ignore them, so I wanted to exclude this specific rule. At the time of writing (October 2020) the CLI Scanner doesn’t allow rules to be excluded or included at run time, so I needed a way to remove them from the ruleset.
After a small amount of digging around, I found the default ruleset configurations in my ~/.sfdx-scanner directory, so I copied them and removed the CRUD rules. Executing the scanner on my sample codebase showed this had no effect, so I checked the file to make sure I’d removed all instances. I found a couple, removed them and executed the scanner again with the same result. Checking the files again showed the CRUD rules were still in place, and this time I knew that I’d definitely removed all references. I realised that these configuration files were regenerated at run time, so any changes I made were being overwritten. Back to the drawing board.
Cloning the Default Rulesets
As PMD is open source, the rulesets are available in the GitHub repo, so creating a copy was fairly straightforward. I edited the rulesets to remove the rules that I wasn’t interested in, and to change the name – I didn’t want to replace the defaults as there are times when I want all the rules applied.
Something else I learned since my original post is that if you are relying on existing Java rule classes, you don’t need to build a JAR file containing the ruleset XML, you can just install the XML file itself, so that saved me a little time. Once I had the custom rulesets in place I was good to go.
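As an illustration, a cut-down ruleset along these lines can pull in a whole built-in category and exclude individual rules. The ruleset name and description below are my own inventions; the category path and the ApexCRUDViolation rule name come from PMD’s built-in Apex rules:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ruleset name="CustomApexSecurity"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>PMD security rules for Apex, minus CRUD/FLS enforcement</description>
    <!-- Reference the whole built-in security category, then exclude
         the CRUD/FLS rule that doesn't apply to system-mode code -->
    <rule ref="category/apex/security.xml">
        <exclude name="ApexCRUDViolation" />
    </rule>
</ruleset>
```

Installing the XML file is then a single command along the lines of sfdx scanner:rule:add --language apex --path ./CustomApexSecurity.xml – check scanner:rule:add --help for the flags in your version.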
Adding to the Workflow
The CLI scanner is now part of my overnight Continuous Integration processing on our newly launched BrightFORUM event management accelerator. This doesn’t have a vast amount of custom code and we’ve had a very clear set of standards and rules that are applied at code review time, so I wasn’t expecting many issues, and I was pleased to find that was the case. All the violations were level 3 or lower as well, and I’d have been pretty annoyed with myself if there were any higher than that.
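My CI setup is specific to the project, but as a sketch of the idea, a nightly scan in GitHub Actions might look something like the following – the schedule, target path and install steps are assumptions for illustration, not a description of my actual pipeline:

```yaml
name: nightly-scan
on:
  schedule:
    - cron: '0 2 * * *'   # run at 02:00 UTC every night
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install the Salesforce CLI and the scanner plugin
        run: |
          npm install -g sfdx-cli
          sfdx plugins:install @salesforce/sfdx-scanner
      - name: Run the scanner against the Apex source
        run: sfdx scanner:run --target "force-app" --format csv --outfile results.csv
```

The CSV output file can then be archived as a build artifact or fed into whatever reporting the team already uses.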
Satisfying the Scanner
Clearing the violations was an interesting exercise. Some were very simple, like unused local variables. Most of the time this was me being overzealous and retaining a reference that wasn’t needed later, but sometimes it was a genuine leftover from a previous approach. As these had been through code review, I can’t see that we’d have picked them up manually in the future unless we’d made further changes.
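The unused-variable findings looked roughly like this invented example – a collection built up and then never read again, left over from an earlier version of the method:

```apex
public void processAccounts(List<Account> accounts) {
    // 'names' is populated but never used again - a leftover from a
    // previous approach, and exactly the kind of thing the scanner flags
    List<String> names = new List<String>();
    for (Account acc : accounts) {
        names.add(acc.Name);
    }
    for (Account acc : accounts) {
        acc.Rating = 'Warm';
    }
    update accounts;
}
```

Deleting the dead code is trivial once it’s been pointed out – the hard part is noticing it in the first place.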
Others required more effort, like cyclomatic or cognitive complexity, which typically means methods that are trying to do too much at once. An automated approach helps here as it’s not down to individual developers to decide that a method is a bit long or a bit complicated and then convince others that’s the case. A tool flagging this up takes all the emotion out of it. Fixing these required some refactoring, which I had no problem undertaking as we have excellent test coverage to confirm I didn’t break anything inadvertently.
I didn’t change the code in every case though – for example, I had a potential SOQL injection flagged up, as the fields in the query were dynamically generated. The string being used in the query had been generated from the built-in schema classes, a couple of lines earlier, so it was guaranteed to contain only valid field names and there was no chance of unescaped user input or the like being included. I could bow to the scanner and escape the string anyway, but that would suggest to anyone reading the code that it was coming from an untrusted source, which wasn’t the case. This gave me a chance to try out an Apex annotation that two weeks ago I had no idea existed – @SuppressWarnings.
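The pattern in question looks roughly like this – the object and variable names are illustrative, but the schema describe calls are the standard Apex ones:

```apex
// The field list comes straight from the schema describe, so it can only
// ever contain real field names - no user input is involved anywhere
List<String> fieldNames = new List<String>(
    Schema.SObjectType.Account.fields.getMap().keySet());
String query = 'SELECT ' + String.join(fieldNames, ', ') + ' FROM Account';
List<Account> accounts = Database.query(query);
```

To a static analyser this is indistinguishable from a query concatenated from user input, which is why it gets flagged.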
From the Apex docs:
This annotation does nothing in Apex but can be used to provide information to third party tools.
To turn off the violation, I added the following annotation to the method that contained the SOQL query:
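A minimal sketch of the annotation, following PMD’s convention of prefixing the rule name with “PMD.” – the method name and body here are illustrative:

```apex
@SuppressWarnings('PMD.ApexSOQLInjection')
public static List<Account> getAccountsWithAllFields() {
    // Query built entirely from schema describe results, so safe from injection
    List<String> fieldNames = new List<String>(
        Schema.SObjectType.Account.fields.getMap().keySet());
    return Database.query(
        'SELECT ' + String.join(fieldNames, ', ') + ' FROM Account');
}
```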
With that in place, the violation is gone. You can suppress multiple warnings by comma-separating them inside the single quotes, and you can also apply the annotation at class level to suppress all warnings of the specified type(s).
This is something that should definitely be used with care – once I’ve suppressed SOQL injection warnings for a method, any code added later that introduces a genuine SOQL injection vulnerability won’t get picked up. I’m typically adding a comment to explain why I’ve suppressed the warning and to say that the method shouldn’t be changed without taking this into consideration. I’m also putting regular time into my calendar to revisit the @SuppressWarnings annotations and confirm that I don’t have a better approach.
I wouldn’t use this annotation to exclude a class from all rules – that could be done, but it would be a big list of suppressions and it would be simpler to exclude the class from those processed. I’d also add a ticket that there is technical debt to be addressed in that class, so that it doesn’t get forgotten about and embarrass me in the future.
The key takeaway from the last couple of weeks’ experience is that automated code scanning leads to cleaner code – things that aren’t needed are gone, things that were hard to understand are now easier, and I have some custom rules that ensure that concerns are appropriately separated. Including it in my CI ensures that I pick up any issues as soon as they are added to the codebase, and I also plan to have the development team run the scanner before pushing their work. Aside from the initial effort to set this up, it’s pretty much zero effort for a lot of benefit.
Static code analysis shouldn’t replace manual code reviews though – an automated tool is configured to look for specific code/markup that indicates problems. A developer carrying out a code review understands the purpose of the code and can determine if it looks like it does what it is supposed to. Code reviews are also an excellent way to share knowledge in a development team. By adding automation you free up code reviewers from having to check whether every variable is used and figure out what makes a method too complex. By using both you’ll catch more problems earlier, so what’s not to like?