
Yes I Khan

Recently, I mentioned to a colleague that I wanted to learn to develop JavaScript programs. It’s not that I want to be a JavaScript developer; rather, there are a couple of simple projects I’d like to bring onto the front burner, and I wanted to try doing it myself rather than outsourcing the effort. The icing on the cake was that it would give me a chance to walk in the shoes of the developers and subject matter experts (SMEs) I collaborate with regularly on my project teams.

My vision was not to become a JavaScript guru, and I wasn’t looking for certification. On hearing this, my colleague suggested the on-line training centre Khan Academy. It would provide the basics I was looking for, and, best of all, the price was right: it’s free (although donations are encouraged). Perfect!

Those with children may be familiar with The Khan Academy; these are the same folks who provide on-line tutorials to help kids learn math. I hadn’t realized they offered training in other subject areas. How much could I really learn from a kids’ study tool, though?

A lot, as it turns out. Over the course of the lessons I’ve gone from understanding the fundamentals of programming to creating some relatively complex object-oriented programs. The lessons consist of demo-based tutorials, with upbeat, light audio narration that makes sometimes complex or abstract concepts fun to learn. Each tutorial is followed up with a coached challenge to reinforce the just-learned content; and, at the end of all the tutorials and challenges in each lesson, a project is assigned in which you must create a program using the concepts from the lesson.
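To give a sense of where the lessons end up, here’s a minimal sketch of the kind of object-oriented JavaScript the projects build toward. The Critter example is my own invention for illustration, not actual Khan Academy course material:

  // A simple "class" built from a constructor function, in the
  // object-oriented style the lessons teach (illustrative only).
  var Critter = function(name, x, y) {
      this.name = name;
      this.x = x;
      this.y = y;
  };

  // Shared behaviour lives on the prototype.
  Critter.prototype.moveTo = function(x, y) {
      this.x = x;
      this.y = y;
  };

  Critter.prototype.describe = function() {
      return this.name + " is at (" + this.x + ", " + this.y + ")";
  };

  var hopper = new Critter("Hopper", 10, 20);
  hopper.moveTo(15, 25);
  console.log(hopper.describe()); // prints "Hopper is at (15, 25)"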

To be sure, the content is targeted at kids, with Pokemon-type characters popping up to provide words of encouragement and direction. But, so what? I’m still accomplishing my goal; and it’s a refreshing change from the often dreary and mundane adult training programs that are out there.

Because Khan is web-based, I can complete the lessons whenever and wherever I want. I usually log in from home for an hour in the morning before the workday begins; however, I have just as easily logged in from Starbucks. Also, because the tutorials remain available in chronological order, I can go back and replay them when I get stuck.

Has it been worth the time invested? I think it has. I now have the basic programming skills I was after (the outcomes of my labors will ultimately become part of the Professional Services Plus website; stay tuned to PSBlog for more on this as we approach the release date). In addition, when development teams are walking me through technical designs and prototypes, or discussing issues, risks and changes, I now have a better context within which to receive and frame this information.

You can find out more about The Khan Academy at khanacademy.org.

FEATURE: Think you’ve got your requirements defined? Think FURPS!

One of the best ways I’ve found to ensure all the bases are covered when defining or reviewing system requirements is to use the FURPS checklist.

Created by Robert Grady, FURPS is an acronym for:

Functionality: This is the one most of us jot down when defining requirements. It answers the question, “What do I want the end product to do?” In addition to considering product features and capabilities, remember to think about what level of security is required.

Usability: Who will use the product? How will they use it? What look-and-feel do you want? What about help screens and self-help “wizards”? One often-overlooked area is user documentation and training, which are often sub-projects unto themselves!

Reliability: What is your expectation in terms of system up-time? What do you consider an “acceptable” system failure? How quickly should the system be able to recover from a failure? What should the mean time between failures (MTBF) be? (For example, a system that runs 1,000 hours and fails twice in that period has an MTBF of 500 hours.)

Performance: Consider the functional requirements you have defined. What level of performance are you expecting? Think about speed, efficiency, availability, accuracy, response time, recovery time, and resource usage.

Supportability: How easy should it be to test the system, and how would this be done? What about maintenance: what’s your expectation in terms of system care and feeding? How configurable should the system be? What about installation: who should be able to install it?

Grady’s FURPS definition actually includes a “+” (FURPS+), lest we forget to consider:

+ Design Constraints: Is there anything the design team should be aware of in terms of how you would like the system designed?

+ Implementation Requirements: For example, do you expect the implementation team to adhere to a standard?

+ Interface Requirements: Are there any legacy or other external systems the product should interact with? How and when should this interaction occur?

+ Physical Requirements: Material? Shape? Size? Weight? (This one is geared more toward hardware requirements.)
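To make the checklist concrete, here’s a minimal sketch of how FURPS+ might be captured as a simple review aid in JavaScript. The structure and helper are my own illustration, not part of Grady’s definition:

  // Illustrative only: FURPS+ as plain data, with a helper that
  // flags the categories a requirements review hasn't covered yet.
  var furpsPlus = [
      { category: "Functionality", addressed: false },
      { category: "Usability", addressed: false },
      { category: "Reliability", addressed: false },
      { category: "Performance", addressed: false },
      { category: "Supportability", addressed: false },
      { category: "+ Design Constraints", addressed: false },
      { category: "+ Implementation Requirements", addressed: false },
      { category: "+ Interface Requirements", addressed: false },
      { category: "+ Physical Requirements", addressed: false }
  ];

  // Return the categories still awaiting at least one requirement.
  function openItems(checklist) {
      return checklist.filter(function(item) {
          return !item.addressed;
      }).map(function(item) {
          return item.category;
      });
  }

  console.log("Still to cover: " + openItems(furpsPlus).join(", "));

Mark each category as addressed during a requirements review, and whatever remains on the open list tells you where the gaps are.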

I’ve just scratched the surface in this post: each of these areas easily warrants a dedicated article to elaborate on the details. For a far more detailed explanation than I could ever provide here, check out Robert Grady’s book, Practical Software Metrics for Project Management and Process Improvement (Prentice Hall).

MOTIVATE the people you work with

Here’s a great way to remember some key success factors to keep in mind when motivating others:

M – Manifest confidence when delegating (if you believe they can, so will they)

O – Open communication (Can you say transparency?)

T – Tolerance for failure (Success is often based on learning how NOT to do it)

I – Involve others (We’re in this together)

V – Value efforts and recognize good performance (Can you say “Thanks”?)

A – Align business objectives to individuals’ objectives (Are we on the same page?)

T – Trust your team and reciprocate by being trustworthy (Critical to motivation success)

E – Empower your team (Don’t micromanage!)

- based on The Human Aspects of Project Management by Vijay K. Verma

FEATURE: Dr. Winston Royce on Managing the Development of Large Software Systems

Father Guido Sarducci, in presenting his idea for a Five Minute University, explained Economics in simple terms: “Supply and Demand. That’s it.” (Five Minutes doesn’t leave much time to devote more to the subject.) Funny thing is, he was right. These are the cornerstones upon which everything else is built.

Which brings me to a white paper I’m reading: Managing the Development of Large Software Systems, by Dr. Winston Royce. It’s a fascinating read on what (and what not) to do to ensure success when developing incrementally. Dr. Royce describes all software development as having just two essential steps:

“There are two essential steps common to all computer program development, regardless of size or complexity. There is an analysis step, followed second by a coding step,” he says.

Of course, he goes on to explain that, while these two steps are sufficient for very small, low-risk projects, limiting larger projects to these two steps dooms them to failure. Go figure.

To reduce the risk of failure, larger projects require more than the basic steps: notably, requirements definition steps before the analysis step, a design step between the analysis and coding steps, and a testing step after coding. Also, as anyone familiar with iterative development techniques will tell you, if we develop iteratively, with each iteration building on the deliverables of its predecessor, we can reduce risk further, because we move our development baseline forward with the completion of each iteration. That is, the more we build, the less risk there is to contend with.

There’s one hitch, though: because our view of the overall solution is limited to the code delivered in the iterations completed so far, any testing is also limited. Thus, it is entirely possible that, when testing iteration 4, we discover a fundamental flaw related to work done in iteration 1, which, in turn, could require significant rework of components from iteration 1 onward.

Says Dr. Royce:

“The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required.”

To respond to this risk, Dr. Royce suggests five steps:

First, introduce a “preliminary program design” step between requirements generation and analysis to determine storage, timing and operational constraints (a.k.a. Performance and Supportability), and hone the design with input from the analysts. In other words, establish a baselined architecture to minimize architecturally significant risks. Dr. Royce suggests three key factors to ensure success:

  1. Begin the process with the designers.
  2. Design, define and allocate the data processing modes (Architectural Candidates).
  3. Write an overview document (a.k.a. Vision) so that everyone has an elemental understanding of the system.

Second, document the design. A lot. “Management of software is simply impossible without a very high degree of documentation,” he says. Why so much documentation?

  1. Evidence of completion. As opposed to a verbal statement of completion status (“How far along are you?” “I am 90 percent done, same as last month”), design documentation forces designers to provide clear, tangible evidence of completion state on which management can base decisions.
  2. When documented, the design becomes real. Until that happens, the design is nothing more than people thinking and talking about the design. (Reminds me of the astronomers’ adage that, if you didn’t write it down, it never happened.)
  3. Downstream processes (development, testing, operations, etc.) require strong design documentation in order to succeed. Take testing, for example: “Without good documentation, every mistake, large or small, is analyzed by one man who probably made the mistake in the first place because he is the only man who understands the program area,” Royce says.

Third, “Do it twice”: build a functional prototype that simulates the high-risk elements of the system to be built; then, once you are satisfied that the high-risk items have been addressed, proceed to build the real thing, the version that will be delivered to the customer. Royce is careful to point out the unique background required of project staff involved at this stage:

“They must have an intuitive feel for analysis, coding and program design. They must quickly sense the trouble spots in the design, model them, model their alternatives, forget the straightforward aspects of the design which aren’t worth studying at this early point, and finally arrive at an error-free program.”

Fourth, plan, monitor and control testing. Two of Royce’s suggestions for success are:

  1. Testing must be completed by independent testing specialists who have not contributed to the design, working from the documentation created in earlier stages.
  2. Visual code scans to pick up code errors. Again, this should be done by someone who’s not as close to the code as the developer. Can you say verification testing?

Finally, involve the customer, early and often. Says Royce, “To give the (contractor) free rein between requirement definition and operation is inviting trouble.”

Does any of this sound familiar? It should; it’s RUP 101. Here’s the kicker: Dr. Royce wrote this paper in 1970. His white paper can be found online under its original title, Managing the Development of Large Software Systems.