…meie igapäevast IT’d anna meile igapäev…


How to improve software quality?

This is part of an internal memo I wrote for a layperson in software development. Please note that these are my ideas and don’t pretend to be absolute, ultimate truths – and they are largely related to a project I am working on (I have deleted project-specific details and examples from this post).

1. Tester

This is the single most important thing. Every software project needs a tester, dedicated to finding errors and assuring quality. The cost of their pay is insignificant compared to the time needed to fix the errors that are otherwise found later in the client’s live environment. Not to mention, it is better for the company’s image to release applications with fewer bugs – and to have happy customers.

The tester will inevitably become the person who knows the software best. She is the one who knows the business rules behind reports, what should happen if the user clicks this button or that one, and which SQL procedure is executed when a given window opens. It is impossible to develop quality software without a tester.

The tester can (and will) be the one who handles customer support – and, as she knows the application best, she can also write the help and documentation – all things that programmers abhor and try to avoid at any cost.

2. Analyst and architect

The article that first described the waterfall software development model actually presented it as a negative example – but over time it has become clear that there is no better substitute for it in big, complicated projects.

Iterative development/agile development/extreme programming (see here for a brief overview) can reduce the cost and development time – but it almost always has a higher error rate and lower customer satisfaction (more iterations mean constant updates for the client, a fast development cycle means that documentation and help are almost inevitably outdated, and so forth). Also, iterative development means that programmers must always work very closely with the client and have excellent knowledge of both the program and the business rules behind it.

Furthermore, the more complicated the program is (and becomes), the less “payoff” there will be from iterative development. In short – for small and medium-sized projects, iterative development can be an excellent idea, but for big, highly complicated applications it is a deathtrap. Also, iterative development requires very good analysts and software architects.

Where I am going with all this: hire good business analysts and architects. Every mistake made during pre-development is costlier – both financially and time-wise – than a mistake made during development. Most Estonian software companies use a weird mix of iterative development and the waterfall model, which seems to work best for medium-size companies and medium-size projects – however, they have not actually thought about the model or documented it, and software development just happens.

I am not sure if the following story is true – I have only heard secondhand (or thirdhand, or fourthhand) rumors about it. Since banks are always very hush-hush about their software, that is no surprise.

A few years ago, one of the biggest banks in Estonia decided to try this new popular thing called Extreme Programming. As no one on their otherwise very good development team was thoroughly familiar with it, they hired a foreign Extreme Programming specialist to act as architect and manage the team. All started out nicely – the first versions worked well, the foreign specialist was eloquent and dressed sharply (always very important for bank managers)… but after six months, the software that had worked so well at first had become a slow, buggy monstrosity – feature creep, if you will. Why? Because they had kept adding features – and although the original architecture was good, it was not capable of handling all the features that were added afterwards; nor were the coders able to work through all the additions and changes while keeping the old functionality intact. In the end, they scrapped almost the whole codebase and rewrote the software from scratch, using the old waterfall model. At least it was software for internal use…

This is not an issue with Extreme Programming per se – it is an example of why you need good analysis and architecture, and a programming model that matches the software being built.

3. Peer code review

Code review is very cheap to implement and invaluable for quality. Another programmer looks at random pieces of a programmer’s code (a non-blocking review). It doesn’t matter whether that other programmer spots all the mistakes (he almost certainly won’t) – but knowing that another programmer will look at his code, every programmer will pay more attention and write higher quality code. It adds just a few percent to the development time, but forces programmers to write clean, commented, readable and reusable code.

4. Code rules

Not draconian, horrible rules like “one comment per every three lines of code”, but guidelines – some of which need to be enforced (“every method/procedure needs to have an introductory comment”), and some that are more relaxed (“string variable names should start with s (“sName”), integer variable names with i (“iAmount”)”). Rules should be created by mutual agreement and kept separate for each programming language (C# and SQL cannot have the same rules). Rules need to be easily accessible (an intranet wiki is perfect for this).

That makes it easier to debug the code – and a new programmer can understand the code much faster. Zero cost to implement (a few hours for someone to write a draft, an hour for all the programmers to discuss it).
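As a hypothetical illustration of such guidelines, here is a sketch in Python (the post’s own examples are C#-flavored; the function and names below are invented):

```python
# Enforced rule: every method/procedure has an introductory comment.
# Relaxed rule: string variable names start with "s", integers with "i".

def format_invoice_line(sName, iAmount):
    """Return a display string for one invoice line.

    This docstring is the "introductory comment" the guideline requires;
    the parameter prefixes follow the relaxed naming convention.
    """
    return "%s x %d" % (sName, iAmount)
```

The exact conventions matter less than the fact that the whole team agreed on them and can look them up on the wiki.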

5. Unit testing

Unit testing is more time-costly and only feasible in certain situations. However, it will reduce the number of “accidental bugs” almost to zero. At minimum, all programmers should be familiar with unit testing and create unit tests for central procedures – tests which a tester or programmer can run after changes. Unit testing is central to iterative programming (test-driven development, TDD).
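A minimal sketch of what a unit test for a “central procedure” might look like, here in Python’s built-in unittest (the post mentions NUnit for .NET; the VAT function and rate below are invented for illustration):

```python
import unittest

# Hypothetical central procedure used by several reports.
def add_vat(amount, rate=0.20):
    """Return the amount with VAT added, rounded to cents."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + rate), 2)

class AddVatTests(unittest.TestCase):
    """Tests a tester or programmer can re-run after any change."""

    def test_typical_amount(self):
        self.assertEqual(add_vat(100.0), 120.0)

    def test_zero_amount(self):
        self.assertEqual(add_vat(0), 0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            add_vat(-1)
```

Run with `python -m unittest` after every change: if a later edit breaks the rounding or the validation, the suite catches it immediately instead of the client.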

6. Miscellaneous small things
  • Automated testing. Great for repetitive testing, but the downside is that it may let certain unforeseen mistakes and UI issues slip through. Many issues with Windows Vista stem from over-reliance on automated testing. It cannot replace a human tester, but it can be used for tedious basic testing.
  • Version control. Needs to be used religiously – including for the database. It will not improve quality per se, but it will help you deal with errors faster by making it easy to revert to a working version, see changes between versions, et cetera.

    I always thought that rumors about programmers who think that version control is redundant were urban legends – but recently I actually met a programmer according to whom “version control is for the weak”. He also thought that SQL JOINs are unnecessarily complex and unreliable, and that it is much better to use subqueries. He had more than seven years of professional experience…

  • Specialization. Modern environments, languages and tools are often too complex for one person to comprehend and be good at all of them. For example – my skills with SQL are passable; I can do everything that is needed, but it takes more time for me than it would for a better, dedicated SQL coder. On the other hand, my .NET skills are decent; I can probably code in it faster and with better quality than that better SQL coder. If a project has both database and web programmers, it gets done faster, with higher quality and fewer errors. However, overspecialization is a bad thing.

    I know a programmer who hates SQL. He does everything possible to avoid it. And as a result, his code tends to have stupid shortcomings. Otherwise he is a good programmer – probably better than I am (well, he also dislikes UI design – which I like a lot). Combine him with a database coder and you have an efficient and strong team.

    On the other end of the scale is a PL/SQL programmer who, about a week ago, asked me whether all operating systems come with Java preinstalled…

  • Training. Even if the topic of the training isn’t immediately needed in a programmer’s daily work, it will force them to learn and memorize “old” things better, improving their work quality. At my previous job, every programmer was eligible for paid training each year worth up to their monthly pay. That does not mean everybody got it – alas, in Estonia the quality of IT training is rather basic, and more experienced programmers were simply unable to find the courses they needed and wanted. Every programmer should be forced/encouraged to get at least one good certificate (i.e. not BrainBench, but Microsoft, Oracle, Sun or Zend) every year.

    Also, let them buy books. An average IT book costs about $30 to $50 – only a fraction of a programmer’s daily pay. But one book can be read by several programmers – and you can also hold “book reviews”, where programmers talk about the books they’ve read (I must admit I am not a big fan of those, even though I enjoy lecturing to my peers, as you can see from this blog post :P). In general, let each programmer buy at least four books every year – this means you will have a decent technical library in a few years. And it will pay for itself in the long run by making your programmers much better.

  • Internal technical documentation. Again, a wiki is perfect for this. A must-have for all projects, in my opinion. “Tricks” and “useful information” should go there too. It makes it easy to find critical information quickly – even simple stuff like server logins, database connection strings and so on.
  • Decent tools. Yes, you need to invest money in those. However, in most cases the amount is less than the programmer’s monthly pay – and using better tools will give better results. No programmer should be forced to code in Notepad – unless s/he wants to. Plugins for Visual Studio, database tools and so forth. Just make sure that the programmer actually needs the tool and doesn’t want it only because it is “the latest thing”.
  • An ISO certificate for the company. This will not improve quality by itself, but it will force the company and its workers to look at their work practices and streamline them.
  • Talk with the systems administrator during in-house testing. He will be the one who has to deal with the load your application puts on the server – or at least, he is the one who can tell how many resources your application uses. If he says “it’s crap”… back to the drawing board.

7 comments »

  1. I guess iterative and agile programming require corresponding project management paradigms to reduce the error rate and achieve positive results – for example, Goldratt’s Critical Chain Project Management technique. Not that I know what I’m talking about :P

    Comment by Mongoolia Surmauss — 2008-04-08 @ 10:34:08 | Reply

  2. Real Programmers don’t use source tools. They patch the code memory image with a debugger in the live environment ;)

    Comment by Offf — 2008-04-08 @ 13:33:16 | Reply

  3. Offf: click the link ;)

    Comment by dukelupus — 2008-04-08 @ 13:41:03 | Reply


  4. It is OK to improve the quality. But I want a sample execution of improving quality.

    Comment by Jack — 2008-06-09 @ 11:05:17 | Reply

  5. […] have written before about one of Estonia’s first Extreme Programming projects – and just as then, I still do not know […]

    Pingback by Ekstreemprogrammeerimine: head, vead ja inetused. Teine osa « …meie igapäevast IT’d anna meile igapäev… — 2008-06-20 @ 12:40:52 | Reply

  6. I disagree with your comments on unit testing. There are many advantages to unit testing; I will give you examples of two.

    1. Unit tests will force you to design your system in a way that provides more opportunities for unit testing, i.e. fewer dependencies in code. In my experience this will lead you to follow the single most important design principle, SRP (the Single Responsibility Principle).

    2. Large systems do change and need to be maintained for long periods of time, which leads to code decay. In order to protect your code from decay, refactoring is essential. Unit tests will protect you and give you the confidence to do large-scale refactoring.

    Comment by Haroon — 2011-05-13 @ 21:32:27 | Reply

