At some point during any Salesforce development project it’s highly likely that things will get tricky. New requirements, for example, or complexity that wasn’t completely understood at design time, coupled with a hard end date when existing systems have to be shut down. At this point the temptation will be to start to bypass some of the development process in an effort to save time. This never works — while the actual writing of code might progress faster, all you are doing is pushing problems down the timeline where they are more costly (in terms of effort and time) to fix. The process is there for a reason — it’s the way that your company has identified as the best way to deliver quality software, and it’s your friend, especially when the pressure comes on.
Unit tests are vital
Often the first casualty of the development process when attempting to save time, the need for unit tests is never greater than when you are trying to do things quickly. Contrary to some opinion, unit tests don’t just allow you to deploy your code to production, they go some way towards verifying that your code performs according to its contract and in accordance with the requirements (unit tests can never verify that code works correctly, as that would mean testing every possible combination of input and current state). When developers are under pressure they are more likely to rush things and miss requirements — good unit tests will pick this up during development. Focus purely on coverage and you will find the issues during QA or UAT, which means that the code has to be fixed, redeployed and retested. Doesn’t sound quicker, does it?
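To make the distinction concrete, here’s a minimal Apex sketch (the `DiscountCalculator` class and its behaviour are invented for illustration, not taken from any real codebase) contrasting a coverage-only test with one that actually verifies the contract:

```apex
@isTest
private class DiscountCalculatorTest {

    // Coverage-only: executes the code, so it counts towards the
    // deployment coverage requirement, but verifies nothing.
    @isTest
    static void coverageOnly() {
        DiscountCalculator.applyDiscount(100, 'GOLD');
    }

    // Contract-verifying: fails during development if the (hypothetical)
    // requirement "gold customers get 20% off" is missed or broken.
    @isTest
    static void goldCustomersGetTwentyPercentOff() {
        Decimal result = DiscountCalculator.applyDiscount(100, 'GOLD');
        System.assertEquals(80, result,
            'Gold customers should receive a 20% discount');
    }
}
```

Both tests produce identical coverage figures; only the second will catch a developer who, under pressure, applies the wrong discount rate.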
Unit tests must pass
When there’s a long list of work to be done, failing unit tests can get put to the bottom of the list or ignored completely. The team quickly develops the mindset that the tests always fail but will be easy to fix later, and before you know it you only have 34% code coverage and half of your tests are failing. This is usually discovered around deployment time, which leads to a bunch of half-baked coverage tests being thrown together, introducing the problems identified above.
Failing unit tests either means that the code is no longer satisfying the requirements or the requirements have changed without the tests being updated. Until you do the investigation you really have no idea which of these it is, and therefore whether any of your system works correctly. Many product companies institute a code freeze when unit tests fail, where no new code can be added to the codebase until all tests pass.
Don’t comment out unit tests because they are failing — this might allow you to deploy to production, but there’s no way you can be confident that the system will work properly once deployed.
Keep reviewing code
As mentioned above, developers under pressure can make mistakes and miss requirements. Reviews provide a second pair of eyes on the code that may be able to spot problems (there are no guarantees, but it’s likely to be more successful than just hoping the original developer got everything!). My experience is that if people know their code is going to be reviewed they will make sure it’s as good as they can get it before submitting for review, whereas if it is going straight into the codebase an attitude of “good enough” quickly takes hold. Again, this is about spotting problems early enough that fixing them doesn’t have a huge impact. Once bad code gets into the codebase it takes a while to get it out again.
QA must take place on a stable system
When time is tight it’s tempting to start fixing code as soon as your QA team identify problems. This is fine in a development environment, but no code should change on the QA system until the entire suite of tests is complete. It might feel like pushing changes as soon as they are ready speeds things up, but by doing this you invalidate all completed tests. If you don’t restart QA at that point, once again you don’t know whether your code will work as intended in production. In a complex system seemingly minor changes can have unforeseen consequences.
QA decide if the code is fit for purpose
Your production system is the gold standard and should be defended against releases of questionable quality. When your QA team is telling you that the quality of a release isn’t good enough, they are doing you a huge favour. While letting you go ahead and deploy might allow you to hit your deadlines, once problems start surfacing in production that will be small comfort. Never ride roughshod over your QA team — the only way they should be persuaded to allow a release through is if you can agree with the customer that missing or failing functionality can be worked around. Note that this isn’t solely the decision of the customer — they will also be under pressure to hit deadlines and go live, so you need to push back if you know it will cause problems. “Only following orders” has historically been an unsuccessful defence.
Developers are always overconfident
Even when all previous experience points to overruns, and a development task has turned out to be harder than expected, developers will still be confident that it will all come together in the last few hours. This is pretty much never the case — save them from themselves and plan realistic completion dates based on previous performance.
Talk with your customer
One way that you might be able to reduce the pressure is to talk to your customer — tell them early and they have an opportunity to change things on their side, maybe de-scoping requirements for the initial release, or manually working around areas that aren’t as systemised as they might be. If you tell them a couple of days before go-live, you’ve given them nowhere to go.
I’m better known in the Salesforce community as Bob Buzzard — Umpteen Certifications, including Technical Architect, 5 x MVP and CTO of BrightGen, a Platinum Cloud Alliance Partner in the United Kingdom who are hiring.
You can find my (usually) more technical thoughts at the Bob Buzzard Blog