multics
jwg

Collaborative software

On Tuesday I went to the Expo portion of Enterprise 2.0 Boston 2012 - essentially a conference on the business equivalent of social networking. This used to be called Groupware, and the academic equivalent is CSCW (Computer-Supported Cooperative Work). Going to these expos is about my only exposure these days to technology other than what I see on my screen or install on my computers.

I like to ask the demonstrators how their products support review or inspection of documents, and to relate this to my experience in this domain twenty years ago.

Twenty years ago our Applied Research group in Honeywell-Bull (working with technology from a research group at the University of Illinois) built a working prototype that could be used for the Software Inspection process (a more formal kind of peer review) that some companies like ours were using. The essence of Inspection is that a document (or program code) is given to a small "inspection team" who, in the Preparation Phase, look at it and make notes. Then a meeting is called, the leader walks everyone through the document, and the inspectors raise the issues they found. The team classifies these defects and attempts to determine the cause. Afterwards the author corrects the document or code, and the statistics are rolled up so the organization can learn what kinds of things are causing defects and improve its process. Although designed for software, this process is useful for other things such as legal documents, marketing materials, and presentations.
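
For anyone who wants a concrete picture of that last roll-up step, here is a minimal sketch of what a defect record and the aggregation by cause might look like. It is purely illustrative - the field names and categories are mine, not those of any particular inspection tool - and written as present-day Python rather than anything we had then.

    from collections import Counter
    from dataclasses import dataclass

    # Hypothetical record for one defect logged during an inspection meeting.
    # Field names and categories are illustrative only.
    @dataclass
    class Defect:
        location: str     # e.g. "spec section 3.2" or "parser.c:118"
        severity: str     # e.g. "major" or "minor"
        defect_type: str  # e.g. "logic", "interface", "wording"
        cause: str        # what the team decided led to the defect

    def roll_up(defects):
        """Count defects by cause so the organization can see which causes
        dominate and target its process improvements accordingly."""
        return Counter(d.cause for d in defects)

    # A made-up inspection with three logged defects.
    found = [
        Defect("spec section 2.1", "major", "logic", "ambiguous requirement"),
        Defect("spec section 4.3", "minor", "wording", "ambiguous requirement"),
        Defect("parser.c:118", "major", "interface", "skipped design review"),
    ]
    print(roll_up(found))
    # Counter({'ambiguous requirement': 2, 'skipped design review': 1})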

In our system, called Scrutiny, inspectors look at the document on their computer and make annotations. Then at an appointed time they all sit at their screens; the moderator zooms his/her mouse over a portion of the document, that portion gets highlighted on everyone's screen along with the annotations made to it, and the discussion ensues via the text messaging component to identify and classify the defect. We first demoed it at the CSCW Conference in the fall of 1992. We wrote a few papers - I was usually the lead author - and among other places got this one, Scrutiny: A Collaborative Inspection and Review System, accepted to the European Software Engineering Conference in Garmisch-Partenkirchen.
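
To make the synchronous part concrete, here is a rough present-day sketch of the idea - not Scrutiny's actual design or code: the moderator's focus is broadcast to every participant, whose screen then shows the highlighted region together with the annotations attached to it. Every name below is made up for illustration.

    from collections import defaultdict

    class ReviewSession:
        """Toy model of the synchronous meeting: the moderator highlights a
        region of the document and every participant's screen shows that
        region plus the annotations attached to it.  Purely illustrative."""

        def __init__(self, document_lines):
            self.document = document_lines
            self.annotations = defaultdict(list)  # line number -> list of notes
            self.screens = []                     # callbacks standing in for displays

        def join(self, on_highlight):
            # Register a participant; on_highlight is called on every broadcast.
            self.screens.append(on_highlight)

        def annotate(self, line_no, who, note):
            # A note made by a reviewer during the Preparation phase.
            self.annotations[line_no].append(f"{who}: {note}")

        def moderator_highlight(self, line_no):
            # Broadcast the moderator's current focus to every participant.
            text = self.document[line_no]
            notes = self.annotations[line_no]
            for update_screen in self.screens:
                update_screen(line_no, text, notes)

    # print() stands in for each participant's screen update.
    session = ReviewSession(["def parse(s):", "    return int(s)"])
    session.annotate(1, "reviewer-2", "no handling of non-numeric input")
    session.join(lambda n, text, notes: print(f"[screen A] line {n}: {text!r} {notes}"))
    session.join(lambda n, text, notes: print(f"[screen B] line {n}: {text!r} {notes}"))
    session.moderator_highlight(1)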

We continued to develop it, got some people in the company to use it, got a DARPA grant, and made several efforts to productize it, including a possible spinoff - but large-company politics made it impossible, so the idea was dead by 1995. I do point out that network technology was pretty crude in those days (Mosaic, the first widely used Web browser, appeared in 1993), so this was a pretty ambitious project.

None of the vendors I talked to could easily support the synchronous meeting portion. Some use screen sharing, which is a partial solution, but no one had the capability of letting everyone easily see other people's annotations (in our system this was automatic). Too bad; I still think this would be a useful function. There are some products available today, such as CodeCollaborator by SmartBear Software, that look like they do a pretty good job at this, but I've never seriously explored them.

(I used the Multics logo for this post - we started doing Peer Review of code in 1968, and had an electronic meeting tool, Forum, which we used in the early 1980s, occasionally for peer review.)

(Deleted comment)
Notoriously (but not notoriously enough, I guess), groupware was the focus of Engelbart's work back in the 60s. I doubt there was anything in there to specifically support code inspections, but annotation and live remote collaboration were certainly covered.

I think part of the reason why no one was interested in productising inspection is that the kind of Gilb-Graham inspection that you describe (you probably used the IBM guy's name for it but I forget what his name is or, by now probably, was) is so expensive up front that hardly any organisations used it. I learned it from Gilb himself the first few days I was in the UK, and believe that inspection would help to make documents and software better. Doing it right takes a lot of time and effort, and today's Agile methodology seems to think that as long as something is produced and works the bugs can get ironed out in the next iteration.

Every organisation that used inspection correctly and comprehensively found that it saved money in the long run. The pointy-haired bosses, however, concerned with expenses, quickly learned how to sideline it and then kill it dead.

We knew about that! I know jwg from way back in those days at Honeywell (> Honeywell Bull > Bull HN > null set?). At one point the mgt. decided: hey, we can save money by laying off the tech writers! After all, we only need them when the product is ready for market, so we can hire them back as contractors then.

Of course, they soon discovered that the tech writers have to be following the project development, because they can't document it properly from a cold start on an eight-week schedule (or whatever). So they wound up paying a helluva lot more than they had been for essentially the same service.

They may have applied the same cigol (spell it backwards) to other types of work. I don't remember whether the fellow I was talking with in the cafeteria one day, who described being laid off and contracted back in like this, was a tech writer or what. He said, "I'm in the same office, with the same title, doing the same work I was doing before, and making a lot more money for it." "Oh," I said, "so you're terminate-and-stay-resident." (I wonder how many who see this comment won't be familiar with that expression.)

"Oh," I said, "so you're terminate-and-stay-resident."

Very droll.

I always say that the technical writers are first on the chopping block of the technical staff. Then the testers. Once these two groups are gone, the company slides down the slippery slope and quality and usability go out the window.

Fagan was the IBM guy.

In the Phoenix division of Honeywell, where they produced mainframes, one of the senior people, Ed Weller, collected some data about the cost of finding defects - in the field after products are released, in the test cells via the standard test process, and by inspection. Inspection was a clear winner, and he used this data to help convince management that inspection could be done.

When I first started as a programmer we wrote code in pencil, got keypunch operators to punch cards, and then sent the card decks off to a computer in another building. Several hours later you got your box of paper back. Two (or at most three) runs a day were about all you could get, so it really paid to check those cards carefully, since a mere typo or other simple error cost half a day.

Ed Weller collected some data about the cost of finding defects - in the field after products are released, in the test cells via the standard test process, and by inspection. Inspection was a clear winner and he used this data to help convince management that inspection could be done.

All the studies that I'm aware of on this subject show that finding defects through inspection is the most cost-efficient way of proceeding. It pisses the customers off much less and produces better code and better designs. The difficulty is that it front-loads the project with time "delays" to do inspections and deal with issues, and with costs that are tangible, while defects found later on or in the field have their own budgets for correction and thus don't impinge on the project or its manager, who by then will have gone on to a greater and more glorious existence on the next project.

Tom Gilb is a Cassandra on this one. Every organisation he's assisted in design and production has improved afterwards. He bangs on about how inspection and a strict methodology of his own invention (I forget what he calls it) will improve the end product. But organisations just cannot see that spending time and money before line 1 of the code is written brings vast dividends.

(Deleted comment)
With regard to your Agile comment earlier, my thoughts are that small, tight iterations can be a good thing, but the methodology as a whole tends toward entropy, not an ordered system. Everything gets pushed to the next cycle, and yes, you may have a product, but it's not a very good one. Funny thing, most people have been so beaten and abused by their software that they sit down and take it, actually believing it's *good*.

It's always jam yesterday, and jam tomorrow, but never jam today. I agree with your comment to a certain extent: a lot of the crapware out there has not been planned except in a vague, Judy-Garland-Mickey-Rooneyish way ("Let's put on a show!"). I believe that Agile is just the current software development fad. It's the computing equivalent of "It's not about the destination; it's about the journey," which is great for the developers: they get paid to go on the journey, so the longer the better. The end users, though, just want to get to the destination quickly and have working software.

Since I was a tester, software doesn't beat me up: I knock it to the ground, stomp all over it, turn it over and kick it in the crotch, and make sure that it doesn't get up again. Bugs follow me like children followed the Pied Piper. It often gets in the way of using software, since I am constantly stumbling over bugs while working.
