 

Software Testing

Started by Charles Pegge, November 14, 2007, 11:39:03 PM



Charles Pegge

Becoming a Software Testing Expert

An entertaining and energetic exposition by James Bach that puts this subject in a new light.

http://video.google.com/videoplay?docid=6852841264192883219&q=Google+techtalks&total=43&start=0&num=10&so=0&type=search&plindex=7

Kent Sarikaya

Thanks Charles, I will watch it the next time I am in a video-watching mood.

Charles Pegge

Agile Testing

Elisabeth Hendrickson


Another enthusiastic discourse on software testing, this time focusing on Agile and XP (Extreme Programming) and the benefits of these methodologies in project development.


http://video.google.com/videoplay?docid=-3054974855576235846&total=36&start=0&num=10&so=0&type=search&plindex=7

Kent Sarikaya

Added to my watchlist, thanks Charles.

Charles Pegge

As every programmer knows, testing and debugging are very time-consuming, often taking the greatest portion of time in a project. So any genuinely good ideas are always welcome. The systems described in those videos are for team programming, but most of us in the BASIC domain are solo operators, so some adaptation is required.

In BASIC I find incremental programming to be the most stable and productive approach: adding tiny amounts of code, then testing. But some algorithms, especially those with multiple loops, such as a binary sort, have to be written in one piece; their intermediate results are often too complex to check.
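The point above can be illustrated outside BASIC. Here is a minimal Python sketch (Python standing in for BASIC, and a binary search standing in for the binary sort mentioned above) of the write-whole-then-test pattern: the loop's intermediate state is awkward to inspect mid-development, so the routine is written in one piece and then verified against a batch of edge cases.

```python
# A routine whose intermediate loop state is hard to check piecemeal,
# so it is written whole and then tested as a unit.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1

# Test the finished routine against cases chosen to cover the edges.
data = [2, 3, 5, 7, 11, 13]
assert binary_search(data, 2) == 0      # first element
assert binary_search(data, 13) == 5     # last element
assert binary_search(data, 7) == 3      # interior element
assert binary_search(data, 4) == -1     # absent value
assert binary_search([], 1) == -1       # empty list
```

The test cases play the role of the incremental checks that could not be done while the loop was half-written.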

I have never had much success with top-down programming; the model inevitably mismatches what is actually required, and one reverts to writing from the bottom up to get the most efficient and stable components.

Donald Darden

Too often the people given the task of defining the software requirements have no concept of how the program will actually be used. The first generation of software used by one company assumed that the unique id given to each new order could be used to classify the agency and class of service being installed. That is, the ids would be assigned in batches to different agencies, then subdivided further by class of service before actually being used for service orders. But different agencies also have subagencies, several agencies and subagencies can reside at a given facility, and agencies order basic services rather than define what they will be used for. Some agencies' needs are much greater than others', and you cannot predict how many ids will actually be needed during the life of the contract.

Consequently, ids were just assigned randomly, with different order entry clerks seeking out blocks of free numbers that did not seem to be getting assigned by other clerks. The system was not designed to allocate numbers for you, so you had to enter numbers at random until you found a free one to assign. As a result, we could not even tell how many numbers, or what sequence of numbers, were associated with anything, and all the follow-on code that was supposed to help us manage accounts based on groups of ids was useless.

The second generation of code dropped any dependency on ids, but another constraint of the first-generation code was that new services were hard to define and enter. We had to use comment fields to communicate any specialty requirements, and the software did not support screening or manipulating records based on entries in comment fields. So the second generation let you specify all sorts of unique parameters in conjunction with each other to correct this problem. Only now you needed extremely well trained and informed clerks to make the right choices, and the company skimped on training. The result was that clerks just filled in blanks and made choices that were sufficient to get the order into the system, without regard for what purpose or consistency existed between different fields, and there had to be a lot of communication back and forth before the customer got the exact service that was requested. It should have been obvious to anyone trained in managing data that they would have benefited from a template process, where service experts construct exact templates by service type and clerks merely fill in the specific fields for a given service.
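The template process described above can be sketched in a few lines of Python. All the template names, fields, and values here are hypothetical illustrations, not taken from any real order-entry system: experts define one template per service type with the fixed choices pre-made, and clerks can only supply the open fields, so inconsistent free-form entries cannot get into the system.

```python
# Hypothetical sketch of expert-defined service templates. Each template
# pre-fills the fields that require expert judgment and lists the fields
# a clerk must supply; an order missing a required field is rejected
# instead of entering the system half-specified.

SERVICE_TEMPLATES = {
    "basic_line": {
        "fixed": {"class_of_service": "voice", "circuits": 1},
        "required": ["agency", "facility", "contact"],
    },
    "data_trunk": {
        "fixed": {"class_of_service": "data", "circuits": 24},
        "required": ["agency", "facility", "bandwidth"],
    },
}

def build_order(service_type, **fields):
    """Merge clerk-supplied fields into the expert-defined template,
    rejecting any order that omits a required field."""
    template = SERVICE_TEMPLATES[service_type]
    missing = [f for f in template["required"] if f not in fields]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    order = dict(template["fixed"])   # expert choices, not clerk-editable
    order.update(fields)              # clerk fills in the rest
    return order

order = build_order("basic_line", agency="GSA", facility="HQ", contact="J. Doe")
assert order["class_of_service"] == "voice"
```

The design point is that consistency between fields is enforced once, by the expert who writes the template, rather than re-decided by every clerk on every order.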

Anyway, those are just a couple of real world examples where things went wrong.  Consistently though, you will also see that the initial requirements of a contract specification is not what the ultimate users need or desire, because they were not party to the specification process.  Their managers and directors make the key decisions without regard to the end user.  More often than not, the contract specifications will undergo some changes and portions of the project will have to be revamped before you get a useful end result, and even then it will only be tweaked enough to make it serviceable.  Most people who get involved with writing performance requirements seem to have no clue as to what would really serve to make the process flow smoothly, or in what areas that moving the process to computers would benefit them the most.