Ideas on Enterprise Information Systems Development

This blog is devoted to ideas on Enterprise Information Systems (EIS) development. It focuses on Lean Thinking, Agile Methods, and Free/Open Source Software as means of improving EIS development and evolution, from a practical rather than academic point of view. You may find here a lot of "thinking aloud" material, sometimes without scientific treatment... don't worry, this is a blog!
Every post is marked with at least one of the Product or Process labels, meaning that it is related to execution techniques (programming and testing) or management techniques (planning and monitoring), respectively.

Wednesday, December 15, 2010

Enterprise Information Systems Patterns - Part II

The project on GitHub has reached an initial implementation, which I separated into a branch named sketch, since it mixes configuration and instantiation. The master branch is now in accordance with the proposal, which I continue to discuss below.

Remember that the core idea is to use the concepts as Lego parts. This means avoiding subclasses in general and instead masking the abstract concepts through configuration. After configuration, I need to implement specific functionalities for the concepts. These functionalities will be implemented in two places:
a) At instances of concrete concepts, by defining specific behavior for
different contexts (paths).
b) At path objects, in the form of coordination code, which will make the path's
movements collaborate to realize a business process.

In this way, we are going to have a two-phased development/customization process:
a) Configuration: defines descriptors, which represent concrete uses of the
abstract concepts. Descriptors list the types used to transform the abstract
concepts into concrete ones. Configuration is done through a Domain Specific
Language (DSL); each concept has a template text to be used for its
configuration. In the future, we expect to define a proper grammar for this DSL.
b) Implementation: uses descriptors to make the concrete concepts instantiable
and implements the specific code related to their concrete use. Each concept has
a callable object with a proper name, which is defined during the implementation
of user stories.

Thus, in a first moment, a domain specialist will configure concrete concepts
using the DSL. Configurations are reused during the implementation, when
user stories instantiate and define the specific behavior of the concrete
concepts.

Right now, we have implemented Resource and Node, as well as the Maskable superclass, which will serve all concepts. I can say that the project is still in the configuration phase, meaning that we haven't implemented any code for "using" the concepts in some business process.
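To make the two-phase idea more concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the dictionary-based descriptor, and the instantiate() helper are my assumptions, not the project's actual code or DSL.

# Illustrative sketch only -- names and the descriptor format are assumptions,
# not the actual code of the GitHub project.

class Maskable(object):
    """Abstract concept that becomes concrete through configuration, not subclassing."""
    def __init__(self, mask=None):
        self.mask = mask  # e.g. 'raw material', 'bank account'

class Resource(Maskable):
    """Anything used for production (material, money, machine time, skills)."""

class Node(Maskable):
    """Something that transforms resources (a machine, a factory, a bank account)."""

# Configuration phase: a descriptor, produced from the DSL, records how to mask a concept.
raw_material_descriptor = {'concept': Resource, 'mask': 'raw material'}

# Implementation phase: user stories instantiate the configured (masked) concept.
def instantiate(descriptor):
    return descriptor['concept'](mask=descriptor['mask'])

steel = instantiate(raw_material_descriptor)
assert steel.mask == 'raw material'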

Regarding the DSL used for configuration, we have to investigate its relation to our BLDD proposal. Remember that BLDD is used for implementing business processes, while the current DSL is used for configuring objects, which occurs before the implementation of those processes. Maybe an execution DSL that uses the same terms as the configuration one will grow from this investigation.

Thursday, December 9, 2010

Assorted Thoughts on Agilism X Traditionalism - Part V

Am I against UP, PMBoK, UML, and Certification?
(a post that summarizes some opinions of mine)

No, I am against the way these techniques were transformed by some people. I call it the Plague of 40-hour Courses: people hear about some "new" and fashionable technique and rush to attend some expensive one-week course on it. They don't try to learn from other sources; at most they buy a book to glance at occasionally. Usually, they also buy expensive tools that "make your process adherent to something-fashionable"*, believing that if they use the tool correctly, everything will work fine.

If you don't practice, if you don't try things, if you don't drink from different fountains, you won't have a nice process, because organizations differ from each other, and there is no such thing as the Lapis Philosophorum, which turns any development process into gold.

The typical manager's answer to the advice above is "we don't have time for experimentation", and then they buy packaged courses and tools, which often won't work well, making managers move on to the next phase of the process-improvement project: either blaming the team or blaming the technique. Others start doing bad things, using the new technique to justify them.

Managers shouldn't waste time controlling small things; they should study better ways of making things self-controlled and of making their teams work more productively and with more quality - in other words, they should become facilitators of good practices. Things like checking whether everyone has filled in their daily working time in some "project management" tool do not add value to clients. In fact, they don't add value to you and your team either. What does all this have to do with the theme of this post? The answer is that if managers used their time to understand and adapt techniques to their process, instead of acting as controllers, they would use these techniques in better ways.

The way people transform good ideas into waste-generation machines is amazing! So, people transformed
(i) UP phases into a waterfall cycle.
(ii) PMBoK into a heavy and expensive annotation effort.
(iii) Modeling into the main software development activity.
(iv) Certification into a reason for not evolving processes.

Regarding these techniques, I think that
(i) UP is a nice list of "things to worry about during product construction". But you don't have to use all of its workflows exactly as they are presented. UP is a framework; therefore, it exists to be adapted.
(ii) Similar to UP, PMBoK is a nice list of "things to worry about in your process management", but you don't need to use all of its documents. PMBoK is a framework; therefore, it exists to be adapted.
(iii) Modeling is for understanding the domain area and also for communication. The users' demands are the focus, and users need running code, not models. Check out Martin Fowler's UML as Sketch principles.
(iv) Certification is a moment to review and document your practices, but you should never forget continuous improvement. More important than telling others that you work well is actually working well! IMHO, certification should be a process of checking the quality of products and of asking clients about your service. Snapshots of the documents used in your process don't provide any real guarantee of quality in the long, or even the medium, term.

*In fact, worse than the Plague of 40-hour Courses is the Plague-of-Tools-That-Make-Your-Process-Adherent-to-Some-Fashionable-Technique. Some years ago I heard a happy colleague say that he "could finally understand object orientation" after attending a course on a UML CASE tool...

Sunday, December 5, 2010

Assorted Thoughts on Agilism X Traditionalism - Part IV

Some days ago I bumped again into an "old" internal e-mail discussion about the Agile 2009 talk "Let's stop calling it 'agile'", whose summary I copy below:
"
Agile development has grown a lot since its rebellious 2001 start. In fact, it has grown to be the mainstream way of developing software. The time has come to drop the word "agile." Agile development is just modern practices in software development. There is no need to explicitly mark practices as Agile. There is no need anymore for opposing camps. Keeping the word Agile and things like "the Agile conference" is holding the development of modern SW development practices back. This session will be in debate form to discuss the above mentioned motion.
"
Some may find this radical. When I first read it, one year ago, I thought the same. However, reading it now I can clearly understand the proposal, and it reminded me of my own path through software processes.

During my undergraduate studies I had contact with Yourdon's Structured Analysis as well as with Object Oriented Analysis & Design. At that time (1990-1994), the first was mainstream and the second was a new proposal that still needed to be improved. The second one attracted me more, though, because I wanted to use Object Oriented concepts as much as possible: since my first contacts with C++ in 1991, I had been convinced that OO was a better way of programming.

First Experiences
Outside the classroom, my first professional contact with "analysis" occurred in 1993, when I took part in the development of a Management Information System (MIS) in Clipper, using Structured Analysis and Design. In those days of waterfall cycles, someone else did the analysis and design and I implemented the system. As expected, after deploying it, users asked for changes almost every week. I implemented them first and, once a week, updated the modeling artifacts for maintenance purposes. Yes, first I changed the programs, because my users were eager to get the software running with the changes, and then, when I had time, I updated the models. Looking back, I see how natural the idea is of using models only to understand the problem, and updating them when needed.

In early 1994 I switched roles, acting as the analyst for another MIS, using Structured Analysis and a classical waterfall cycle. The implementation was successful at least up to its deployment, later in the year. After that, I lost contact with it.

In the second semester of 1994 I joined a bigger Enterprise Information System project, with circa 15 developers and two consulting companies working on it. It was a very uncommon environment, using IDEF artifacts for modeling, C++ for programming, and Sybase. The transformation from functional models to C++ methods was done during interesting CRC Cards sessions, and class diagrams were derived from Entity-Relationship Diagrams. Object-relational mapping was done according to a set of very basic rules created internally by the team. The only problem we had was the fact that some process models had been prepared almost a year before we started to touch them. However, we had a very good business analyst (he was an Industrial Engineer) who re-checked all business processes with the clients before they entered the programming phase. Therefore I can say that the process was interesting; however, the "waterfall way" of doing things created a lot of rework.

Before I forget: effort estimation in all cases was done by "expert opinion"; people didn't believe in COCOMO's use of constants nor in Function Point counting - which needs a lot of things already defined to have a chance of being "accurate". I could only remember my Software Engineering classes and started to see them as something closer to Alice in Wonderland than to an Engineering discipline.

Enters Object Modeling Technique (OMT)
However, it was only in 1995 that I had my first chance to use an OO method professionally throughout the whole process; specifically, I used the OMT method. OMT was interesting because it used a mixture of class and state diagrams with Data Flow Diagrams (DFDs) to describe its Functional Model. OMT was a product of its time: OO languages were becoming mainstream, but Structured Analysis was still the mainstream process paradigm.

Our Environment in 1995
That year I moved back to my hometown, aiming to take part in the creation of the first big public university in the region. We were a two-man team, working part time at the Math Laboratory - from which the Computer Science Laboratory would emerge years later. The mission was to develop a library management system to be used by the three campuses' libraries. The university was new and many, many information systems were necessary. Library Management Systems were really expensive at that time, so the university's president asked us to create the system.


Our Process in 1995
We decided to use a mixture of the Prototyping Life Cycle with OMT's artifacts to build the system, since we had very little knowledge of the problem area (libraries). For more complex operations, we would use CRC cards to define how classes would interact. We decided to use Delphi 1.0, which allowed us to create nice GUIs as well as use Pascal, instead of Visual Basic (which I hated). Moreover, my colleague was developing an object-relational mapping library in Pascal for his Master's Thesis, which gave us a cleaner and more powerful way of doing Object-Relational Mapping (ORM). We resisted the temptation of using Delphi's components for direct access to the database, for the sake of really using object orientation for coding business logic.


First we drafted a list of high-level requirements, separated into modules, and asked the librarian to prioritize them. Then we started to follow a process that we built on top of our experience with previous systems:
1) Get the next priority.
2) Detail its requirements.
3) Create a prototypical GUI and a draft database to reflect the requirements.
4) Re-check requirements during GUI presentation to the user.
5) Model stuff in OMT.
6) When appropriate, use CRC cards to define classes' responsibilities and collaborators.
7) Write code ignoring the database*.
8) Write ORM and database code.
9) Present the implementation to the user.
10) If the user was happy with the results:
a) we deployed the new module and went back to 1)
else
b) we looped back to 5) if modeling was needed, 6) if changes in classes were likely to occur, even without new modeling, or 7) if only changes in algorithms were needed.


* We used an ancestor of what is today called "test doubles". In our case, collections of objects with typical values were created "by hand", so that we could test the algorithms. Once the algorithms worked properly, we implemented the persistence code and finally took away these "fake objects". Sometimes I developed the algorithms and my colleague put things to work with the database, since he was the ORM library's developer. We didn't want to waste time with persistence problems while we were trying to solve users' problems. Persistence was a purely technical question, not a main development concern.
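Translated into today's terms (and into Python, since the original code was written in Delphi/Pascal), the trick was roughly the following; all names here are illustrative, not taken from that system:

# A modern-Python illustration of the 1995 "fake objects" trick; names are hypothetical,
# the original code was written in Delphi/Pascal.

class Book(object):
    def __init__(self, title, year):
        self.title = title
        self.year = year

def books_published_after(books, year):
    """Business algorithm under test: no database access at all."""
    return [b for b in books if b.year > year]

# Hand-built collection of objects with typical values (the "fake objects"):
fake_catalog = [Book("Dom Casmurro", 1899), Book("Vidas Secas", 1938)]

assert [b.title for b in books_published_after(fake_catalog, 1900)] == ["Vidas Secas"]

# Only after the algorithm worked would the ORM/persistence code replace fake_catalog
# with objects loaded from the real database.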

Effort estimation was done before each of these "proto"-iterations, based on experience, because I kept the same opinion regarding effort metrics. The project was really successful: although we both left the university at the end of 1996, the system was used for years, until the university bought a new one, with external support from a development company.

I used this process more or less unchanged until 2000, when I got access to my first UML CASE tool and started reading about the Unified Process and Eriksson-Penker's business modeling extensions for UML. I had known UML since 1997; however, until 1999 I had only one project, which already used OMT, and I had no reason to switch from it to UML. I used UP variations until 2006/2007, sometimes without the expected output, and never fully implemented, since I thought many things were too bureaucratic to be used.


The main thing I learned from those days was that you must mix techniques, and even create new ones, to make the development process work for your environment - not the opposite. You will hardly find a process that fits your environment, team, culture, and problem exactly. I've seen a lot of people eager to follow some guru or certification, and when things go wrong they blame themselves for not being able to do exactly "what must be done." They should blame themselves for not being able to adapt the process to their environment.


Another thing I can clearly identify now is that some recent Agile techniques are modern versions of old programming tricks, the kind of hints shared during coffee breaks that never appeared in the "software engineering" books.


Going back to the "drop the word agile"
I think my personal history with development processes is the same as that of many other developers. In the early 1990s, there were a lot of different Object Oriented notations and processes, all seeking to solve the same problems in slightly different ways. Hybrid methods such as OMT (and its DFDs) were said to be the ideal solution for a world in transition. And to me they seemed to be.

Then came UML and UP, both bringing some standardization to the software development world. Finally, Model Driven Engineering (MDE) and Model Driven Development (MDD) could realize their full potential. And they did, given that I think their potential is now exhausted - at least the potential to be innovative in research terms, and to respond to constant change in practical terms.

I agree with the phrase "Agile development is just modern practices in software development." Just try to list a single innovation that came from the MDD world in the last few years. Every "new" MDD technique I have seen recently was either repetition or an "extreme case", hardly useful.

Many people still resist Test Driven Development (TDD), for instance, because they can't understand how tests can be used for design. There is a common misunderstanding that design is related only to blueprints, and thus that design = modeling in software development. But think of Stress Analysis, which is mostly based on Math and Physics instead of drawings: design is not only about blueprints, and, actually, if you can rely on Math to do the design, it is more likely that your design will work. As I said before, coding is closer to Math than modeling is.

For a developer to understand how TDD can drive design, he/she must practice it. You can draft a model to understand the problem in the beginning; however, given that you are going to interact with tests - which are code - and with the code that answers to them, you are going to evolve the design as the need for testing evolves. And tests evolve as the implementation of requirements evolves. The problem is that many decision makers are those who "don't program anymore", so they will "never" be convinced, unless some programmer shows them how things work.
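A deliberately tiny sketch of that rhythm, in Python's unittest (my own example, not taken from any specific project): the test is written first and therefore forces a design decision, namely the interface the production code must expose.

# A minimal TDD illustration (my example): the test drives the interface of the code.
import unittest

# Step 2: the simplest production code that makes the tests pass.
def overdue_fine(days_late, daily_rate=0.5):
    """Design decision forced by the tests: a pure function with explicit parameters."""
    return max(days_late, 0) * daily_rate

# Step 1: written first, it drives the interface above.
class OverdueFineTest(unittest.TestCase):
    def test_charges_per_day_late(self):
        self.assertEqual(overdue_fine(days_late=4), 2.0)

    def test_never_charges_negative_fines(self):
        self.assertEqual(overdue_fine(days_late=-1), 0.0)

if __name__ == "__main__":
    unittest.main()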

There is also an economic and cultural reason for people not recognizing agilism as the "modern" software engineering: agile techniques mostly were - and still are - nurtured by individuals and communities of developers, in opposition to big consultancy companies. Because of this, Agile techniques will take longer to enter the EIS world. People will repeat to exhaustion "Why don't big companies like A or B use a lot of Agile if it is so good?", and will answer to themselves "because it doesn't scale!". Isn't it rather because you don't need expensive tools and related consultancy to adopt them? Isn't it because of culture? Isn't it because a lot of people who stopped programming after becoming project managers are afraid of seeing part of their daily activities labeled "waste", thus endangering their jobs?

I witnessed similar reactions in the early 1990s regarding Object Orientation, and if the "legacy guys" had been right, I would still be saying things like "Structured Analysis rules!".

So, drop the word Agile!

Wednesday, December 1, 2010

Enterprise Information Systems Patterns - Part I

This post aims to start a discussion on design patterns for EIS. We have just started a GitHub project, which right now is just a rough sketch; however, it already shows some basic ideas, implemented in Python with BDD (specloud, should-dsl, and Lettuce). By checking the features (*.feature files) you can understand the patterns' requirements, and by checking the specs (*_spec.py files) you can understand how the code implements them.
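For readers unfamiliar with this stack, the sketch below shows roughly what such files look like. It is my own illustration, not one of the project's actual files, and it assumes should-dsl's |should| / equal_to matchers; the feature text is paraphrased as a comment.

# resource_spec.py -- an illustrative spec in the style described above (my example;
# it assumes should-dsl's |should| / equal_to matchers).
#
# The corresponding *.feature file would state the requirement in business terms, e.g.:
#   Scenario: mask a resource
#     Given an abstract Resource
#     When I mask it as "raw material"
#     Then its mask should be "raw material"

import unittest
from should_dsl import should

class Resource(object):                  # stand-in for the project's Resource concept
    def __init__(self, mask=None):
        self.mask = mask

class ResourceSpec(unittest.TestCase):
    def test_it_can_be_masked_as_a_concrete_resource(self):
        resource = Resource(mask='raw material')
        resource.mask |should| equal_to('raw material')

if __name__ == "__main__":
    unittest.main()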

We decided to start by revisiting ERP5 concepts, separating them from ERP5's underlying platform, Zope, and, in some cases, even reinterpreting them. The concepts are the following:

Concept 1: Resource
A resource is anything that is used for production. It can be a material, money,
machine time, or even human skills.
Concept 2: Node
A node is something that transforms resources: for instance, a machine, a factory, a
bank account.
Concept 3: Movement
A movement is the transfer of some amount of a Resource from one Node to another.
Concept 4: Path*
A path describes how Movements are used to implement business processes.

ERP5 also has the concept of Item, an instance of a Resource. For now, we have
decided not to use it; otherwise it would be necessary to create classes for
instantiating the other three abstract concepts.

The core idea is to use the concepts as Lego parts. This means not using
subclasses in general, but masking the abstract concepts through
configuration. For instance, a Movement is first configured as a concrete
movement, such as transferring goods from a supplier to a customer. After that, it
can be instantiated. Of course, at some point extra coding is necessary, and we believe that methods for implementing specific algorithms will emerge
on the coordinator element, the Path instances.

We therefore define two new "patterns": Mask, which means to take an abstract concept and turn it into a concrete concept through a proper configuration, and Coordinator, which is an object (also maskable) that coordinates others to realize a given business process. Maybe these patterns already exist under other names and/or with similar purposes; we will check this more carefully as the project goes forward (we are really iterative & incremental).

The motivation for this project is that people like to say that frameworks are like Lego, but when you need to subclass their concepts instead of using them as they are, that is not the "Lego way" of doing things. We only use masks to provide useful labels for users; the intention is not to extend the original concepts. When I mask a Movement as a Sale, I am only setting its mask attribute to 'sale'.

The only concept that will be extended is Path (nothing is perfect!), since it will be used to implement the specific algorithms necessary to appropriately integrate movements - realizing the business process.
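As an illustration of how Mask and Coordinator fit together (again a sketch with assumed names, not the project's code), a Path subclass could coordinate masked Movements to realize a simple sale process:

# Illustrative sketch of the Mask + Coordinator ideas; all names are assumptions,
# not the actual code of the GitHub project.

class Node(object):
    """Something that holds/transforms resources; here it simply keeps a stock."""
    def __init__(self, name):
        self.name = name
        self.stock = {}

class Movement(object):
    """Moves a quantity of a resource from one Node to another."""
    def __init__(self, resource, source, destination, quantity, mask=None):
        self.resource, self.quantity = resource, quantity
        self.source, self.destination = source, destination
        self.mask = mask  # masking (e.g. 'sale') instead of subclassing

class Path(object):
    """Coordinator: the one concept meant to be extended with process-specific code."""
    def __init__(self, movements):
        self.movements = movements

class SalePath(Path):
    def execute(self):
        # Coordination code that makes the movements collaborate to realize a sale.
        for m in self.movements:
            m.source.stock[m.resource] = m.source.stock.get(m.resource, 0) - m.quantity
            m.destination.stock[m.resource] = m.destination.stock.get(m.resource, 0) + m.quantity

supplier, customer = Node('supplier'), Node('customer')
sale = Movement('goods', supplier, customer, quantity=10, mask='sale')
SalePath([sale]).execute()
assert customer.stock['goods'] == 10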

We probably won't supply something readily useful for programming, because we are not considering platform issues; therefore, if you need a ready-to-use ERP framework in Python, check ERP5. The intention is to discuss how to build highly configurable frameworks, instead of providing a real framework. After we get the basic configuration machinery done, we are going to use BLDD to implement didactic cases: from business process modeling to coding and testing.

(The Change Log maps this series of posts to commits in GitHub)

Saturday, November 27, 2010

Proof of Concept for the BLDD Tool - Part VI

We have released a compilation of these posts, with some more explanations, as a draft of our Business Language Driven Development (BLDD) proposal; the technical report can be found here.

There is a lot of work to do on top of BLDD, such as:
- Defining in more detail the use of a companion textual Domain Specific Language (DSL), probably with the support of a Rule Engine.
- Implementing a usable tool; we are investigating an extension of Drools for this purpose. Drools seems to be a good choice because it is BPMN compatible, implements a Rule Engine, and provides an Eclipse plugin. And, of course, it is open source.
- Implementing BLDD on top of an ERP; our choice is ERP5, for the reasons listed in the first post of this series. This case study should show the applicability of the proposal.

You can find more "future work" in the report. We are open to collaboration and discussion.

Wednesday, November 24, 2010

Assorted Thoughts on Agilism X Traditionalism - Part III

This blog focuses on Free/Open Source Software (FOSS) and Agile/Lean (AL) for Enterprise Information Systems (EIS). NSI has been using Free/Open Source software since its creation in 2002 (I have used it since 1998); however, AL is relatively new to us. I think it is interesting to tell how we "realized" AL.

Our Turning Point

In a broader sense, we started using automated testing in 2005, when we were developing a system for vibration analysis in LabVIEW, which uses G, a visual language for Virtual Instrumentation. To test it, we had to simulate working gas turbines, which meant generating matrices with thousands of lines and dozens of columns, representing the vibration signals produced by dozens of sensors. In fact, we had to develop a complete simulation module that excited the analysis algorithms. Curiously, parts of this module were created before the algorithms, because we needed to better understand the behavior of the real equipment - a kind of "proto" test-first approach. We also filled vectors and matrices with trusted results obtained from the laboratory's hardware and software, and automatically compared them with our own results, to check our algorithms' accuracy.
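In today's Python terms, that accuracy check amounts to comparing the algorithm's output against stored reference data within a tolerance. The sketch below is only an illustration of the idea (the original was implemented in LabVIEW's G; the file names and the FFT-based "algorithm" here are my assumptions):

# Python sketch of the reference-data check described above; purely illustrative,
# the original was implemented in LabVIEW's G. File names are hypothetical.
import numpy as np

def vibration_spectrum(signal):
    """The analysis algorithm under test: here, simply a magnitude spectrum."""
    return np.abs(np.fft.rfft(signal))

signal = np.loadtxt('sensor_signal.txt')               # simulated or recorded sensor data
reference = np.loadtxt('lab_reference_spectrum.txt')   # trusted laboratory results

# The automated accuracy check: results must match the reference within a tolerance.
assert np.allclose(vibration_spectrum(signal), reference, rtol=1e-3)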

In the realm of Enterprise Information Systems, our use of automated testing started only in 2007, when we began two important Enterprise Content Management (ECM) projects in parallel. We started to automate the "black box" tests - because we had no patience to simulate the same user actions dozens of times - and the load tests, for obvious reasons. However, our unit tests weren't disciplined and were sometimes hampered by the platform we used. Obviously, the acceptance tests started to find wrong behavior in the software, given the lack of complete unit testing. By that time (with results published in 2007/2008), and in parallel, we developed a hybrid development process for the ERP5 system that used MDD (including a nice set of supportive tools) together with automated testing.

The year of 2008 represented our turning point, because of three factors:
(i) We realized that our MDD-based process for ERP5 was sound; however, our industrial partners needed something leaner, and MDD didn't make much sense when developing on top of a highly reusable framework. Moreover, refactoring made modeling a waste in general. We realized in practice that good design can be done on top of code, instead of models.

(ii) I managed to bring to the team an associate researcher who was (and is) a very competent developer, and who was (and still is) getting very good results with Agile Methods. He reminded me of my Master's Thesis (1997), which proposed a hybrid push-pull production planning and control method: instead of using complicated production programming algorithms, which needed very accurate (and usually nonexistent) information, Kanbans were used to smooth the production flow.

(iii) I read the book Ghost Force - The Secret History of the SAS, written by Ken Connor, and realized that trying to use conventional methods for managing unconventional projects and teams was stupid. After sharing my impressions of some team coordination techniques narrated in the book, we immediately adopted two principles*: a) one man, one vote (the Chinese Parliament) and b) whoever holds the most knowledge about a specific mission leads it, no matter his/her rank. Of course, we already knew that the most valuable asset of any team is its people.

*Strangely, my team didn't accept things like parachute training, or other really interesting techniques presented in the book ;-)

So, I can say that we woke up to agile techniques as a whole in 2007/2008. In 2008 we started to use a Scrumban process, a mixture of Scrum for planning with kanbans for programming the production. Things weren't enforced: we realized that disciplined automated testing and Scrumban were superior for our environment, and that's it. When I say we, I am referring to everyone, from vocational students to master's students and researchers (one man, one vote, no matter the rank...). In 2009 we started to use BDD; actually, we developed a tool stack for BDD in Python to serve our projects.

In other words, although we are a small organization (circa 30 developers spread over different projects, divided into development cells, plus some 10 associated people), we came from a traditional way of thinking (though we never were bureaucrats) that in fact never worked well for us. We use AL because we realized in practice that it is better for us, not because it is fashionable. An interesting point is that our last paper on MDD + Automated Testing was cited by a very well known Software Engineering researcher as one of the references for modeling EIS.

We studied and tested the techniques by ourselves, using the Chinese Parliament and The Best in the Mission is the Leader principles to put them into practice. We did right and wrong things, adapted methods, mixed techniques, created tools for BDD, for improving programming productivity, for automated testing, for kanban control, and many others, to cite only a few, and we also used other people's tools. Basically, we discussed a lot. In other words, instead of following some manual, standard, or course, or pursuing some certification, we created project management knowledge! A good result of this is that all five students who left our research group this year are working for innovative organizations, with salaries higher than average. Moreover, they got their positions in highly competitive markets, including Europe. They were all hired in the same way: staying some days at the organizations, programming with the teams and taking part in meetings.

A personal commentary: now I feel totally comfortable teaching project management and telling my daily experiences. Before, I had to teach techniques that didn't work for us, and I had to insist on teaching them because the books said they worked - while feeling like an incompetent project manager. Now I can see that this type of process doesn't work because it cannot deal well with uncertainty in general (maybe I really am not a good project manager; however, things work well for us now anyway).

A last commentary: although we are part of a teaching & research organization, we've been developing serious projects with companies and government agencies for years. When I say serious, I mean projects with deadlines, fixed budgets, lots of users, the need for continuous support and evolution, and the expectation of adding value to the users' organizations. Every single piece of software we have developed since 2002 is or was either used internally - in our development process - or by our industrial and government partners. Moreover, since we develop free software, our products are also used by other people and organizations besides our formal partners.

Sunday, November 14, 2010

Assorted Thoughts on Agilism X Traditionalism - Part II

Back to Basics

Management has three basic functions: Planning, Execution, and Control. We can summarize the basic differences between traditional methods and agilism/lean, function by function:

Traditional:
- Planning: based on effort metrics to plan the whole project; change is a "risk".
- Execution: design = models; try to automate programming; centralized decision-making; quality a posteriori, by annotation, and the responsibility of a specific team.
- Control: centralized; try to be static (follow the original plan).

Agile/Lean:
- Planning: based on collective evaluation and on measuring the current development velocity; plan for the next period; embrace changes.
- Execution: design = code; automate repetitive tasks; decentralized decision-making; quality a priori, automated, and the responsibility of the producers.
- Control: decentralized; try to be dynamic (follow the demand).

In my opinion, the difference boils down to one thing: Agile and Lean seek to be dynamic, while Traditional seeks to be static.

Friday, November 12, 2010

Assorted Thoughts on Agilism X Traditionalism - Part I

Some days ago I took part in a debate on Agilism X Traditionalism for Software Quality Assurance. I only accepted to take part in it after its organizers changed the original name, "Agilism X Software Quality", as if Agilism were against Quality! Yes, there are still people who think that Agilism means non-organized, low-quality development, simply because agilists don't like to take a lot of notes or to use a lot of models. It reminds me of the early days of Free/Open Source, when most people who were against it had never taken part in a FOSS project, or even "touched" any open source software.

I read somewhere that Agilism (like Lean Thinking) means a change in mindset. That's why, IMHO, many people cannot understand how Agilism works. The bureaucratic mindset cannot understand how a technique that doesn't use a lot of annotations, long meetings, change management boards, hierarchical planning etc. can work.

That's the same mindset found in the USA's automobile industry in the 60s and 70s, which was defeated by Japanese car builders. The US industry couldn't understand how a plant controlled by paper cards could work better than the enormous management structure they had, matured on top of decades of experience in building cars - after all, they had Ford!

But the Japanese did.

And they still do: in spite of Toyota's current quality crisis, it and its followers still reign in the automobile industry. I see people from the traditional methods camp in the software industry using the same arguments:

-Agile doesn't scale = JIT doesn't scale.
-You need to plan in detail beforehand* = JIT has no planning tools.
-You need to take notes of everything you do, otherwise you will lose control = for JIT, taking notes is considered waste most of the time; do things in a way that you don't need to take notes.
-How can quality be controlled by the developers, instead of a separate group = how can quality be controlled by the workers, instead of a Quality Control Department.
-etc

*(although detailed and accurate plans for software development are unfeasible in the real world)

The truth is that things like automated testing, continuous integration, refactoring, and other modern development techniques were born in the Agile cradle. These are Product related tools. Traditionalists still play with Process related tools, repeating the "if the process is good, the product will be good" mantra. Yes, we need a good process, sure; however, you must avoid becoming mesmerized by the beauty of complex project management processes. Instead of putting a lot of people to take notes of a lot of stuff and correct your planning all the time, try to avoid problems rather than correct them. The focus must be on the product; therefore, product related tools will bring much more value to your production chain than process related tools.

I believe that the only really scientific way of producing software is through Formal Methods (FM), because they represent the use of Mathematical principles for building software. If anyone can use the term Software Engineering, it is the "FM guys". The problem with FM is that they are still quite hard to understand and apply, and there are only a few people really capable of using them to produce software. Until FM produces things like the tables that engineers use to simplify their calculations, it won't reach the mainstream. FM is still dependent on developing "ad-hoc" mathematical models, and the ordinary developer doesn't have the skills for that - like the ordinary engineer, who doesn't develop mathematical models but makes use of them. Maybe when FM becomes usable by the ordinary developer we will be able to (really) do Software Engineering.

Therefore, we must at least stay as close to FM as we can. Now, tell me what's more formal:
-Running code.
-Drawing models and using manual annotations to connect them to code*?

*(code generation is mostly limited to skeletons, someone will write algorithms anyway, therefore you will need to connect things by hand in the end).

You must agree with me that running code is much more formal than models and annotations, simply because the former is what software is made of. So, please don't tell me that MDD is "more scientific than Agile because it has the same principles as the other Engineering disciplines"... Running code means a functional machine, and that's the basic principle of... Engineering!

ps.: Like in the (hard) industry, Agilism automates repetitive tasks, such as testing and building, and this automation provides safer software verification. The other "end", Validation, is provided by BDD.

Wednesday, November 10, 2010

Effort Metrics Sophisma - Part IV

This is a summary and translation into English of a presentation prepared by me and an associate researcher in 2008. It was used to support our arguments against the detailed one-year plans demanded by the management of a big and interesting project we have been taking part in since 2006.


So, what's the point?
The point is that the starting point is wrong! When people from the software development area looked for planning references in industry, they thought it was obvious to borrow the techniques used by the custom production industry - where each product represents a different production project, and therefore design is done product by product - and not those of the repetitive production industry. After all, software development is custom production, not the reproduction of the same design in large quantities.

However, the reality is not that simple. While custom production runs under a low uncertainty environment (given that the product design was done beforehand), repetitive production usually runs under uncertainty - demand uncertainty.

Demand uncertainty is what happens to software. I am not talking about uncertainty in quantity, but uncertainty in building effort. Therefore, although we are talking about a custom product, the techniques that apply to its production planning are the ones related to repetitive production.

In my opinion, those related to Lean Thinking... I am going to talk about this in future posts, now let's go to our third and last logical chaining.

Third Logical Chaining
From the second chaining:
Knowledge based production AND intangible good → there are no physical metrics
Non-physical metrics → inaccurate estimates
We have:
Inaccurate estimates → high uncertainty in the production process
AND
Custom production → low uncertainty in the production process is expected
Then:
Although software development is custom production, it is not reasonable to use the industry's custom production planning methods, because they expect low uncertainty in the production process, while what we have is high uncertainty!!!


Now that I have shown, with some formality, that the current effort metrics don't work for software, I feel more comfortable going forward to explain how to use Agile and Lean Thinking methods for software development planning.

Friday, November 5, 2010

Effort Metrics Sophisma - Part III

This is a summary and translation into English of a presentation prepared by me and an associate researcher in 2008. It was used to support our arguments against the detailed one-year plans demanded by the management of a big and interesting project we have been taking part in since 2006.


The (Tough) Software Reality
Unfortunately, software is an intangible good, produced by knowledge workers. An intangible product means that it is not possible to measure its physical features, such as weight or diameter. Production based on knowledge is also hard to measure, because it isn't repetitive: there isn't anything like the industry's "average operation lead time". Moreover, software development is highly dependent on people*, with all their specificities, problems, ways of thinking etc.


(*If MDD were really able to drastically reduce the need for programming, organizations would no longer be hiring lots of programmers ...)


In other words, uncertainty in software development comes from:
- Non-repetitive and creativity-based work.
- Constant changes in technology.
- Tasks considered simple that turn out to be complex.
- Frequent changes in requirements.
- The need to learn about the domain area during development.
(the above arguments are also valid for defending iterative & incremental cycles...)

Second Logical Chaining
Software → intangible good, non-physical
Software → knowledge based production; knowledge is a non-physical resource
Knowledge based production AND intangible good → there are no physical metrics (by definition)
Non-physical metrics → inaccurate estimates
From the first chaining:
Detailed planning needs → detailed estimates
However
Software → inaccurate estimates
Inaccurate estimates → non-detailed estimates
Then
Non-detailed estimates → non-detailed planning

In other words, it is not feasible to produce detailed planning for software, even when using waterfall cycles!!!!!

The central point for reaching this sad conclusion is
Non-physical metrics → inaccurate estimates

I leave this hook here so that people can try to dismantle our arguments: if you prove that you can have accurate estimates without having physical metrics, the argument here is broken.

More Conclusions
Detailed planning works in the hard industry; however, some people try to transplant it to the software development industry in careless ways. Many organizations insist on this broken model, bringing high costs to the control and (re)planning functions, and in the end the goal is no longer to deliver a good quality product on time, but to create a false notion of control and to show that managers are capable of "thinking ahead" (although this "ahead" is unknown in the real world...).

For business people, not being able to create a detailed plan for months ahead is something like signing a "proof of incompetence"; many prefer to create plans that need strict control and whose baselines, even so, change all the time. At this point, the tail starts to wag the dog...

Tuesday, November 2, 2010

Effort Metrics Sophisma - Part II

This is a summary and translation into English of a presentation prepared by me and an associate researcher in 2008. It was used to support our arguments against the detailed one-year plans demanded by the management of a big and interesting project we have been taking part in since 2006.


Interruption


I will briefly interrupt this series because I have just bumped into an "old" article on software metrics, on which I would like to comment briefly: Jones, C. Software Metrics: Good, Bad and Missing, Computer, vol. 27, no. 9, pp. 98-100, Sept. 1994.
Its abstract says
"
The software industry is an embarrassment when it comes to measurement and metrics. Many software managers and practitioners, including tenured academics in software engineering and computer science, seem to know little or nothing about these topics. Many of the measurements found in the software literature are not used with enough precision to replicate the author's findings - a canon of scientific writing in other fields. Several of the most widely used software metrics have been proved unworkable, yet they continue to show up in books, encyclopedias, and refereed journals.
"

Although it was published 16 years ago, I think the problems listed in this paper are still valid, and the paragraph above reflects my opinion on the use of metrics for planning in software: they only work under certain conditions and may have worked for a given sample of projects, but may not work for my project. In my humble opinion, the current metrics don't present a sufficient level of determinism for us to use them seriously for planning.

As I said in this series' first post, if I produce a brick here in Brazil, it will present the same size and weight in China, for instance. If I use the correct Engineering units and calculations, someone will reproduce "my experiment" of using these bricks anywhere in the Globe*. But the same doesn't occur with software requirements and thus with source code. I can send the same requirements for different people and they will implement them using different languages, paradigms, algorithms...

*(before someone says that bricks made in Brazil can present different plastic behavior in other regions of the Globe, I note that Materials Science has already developed methods to deal with this, while Software "Engineering" is still far from doing the same for software.)

It is amazing how people use the constants found in some metrics (such as the one found in Use Case Points, which still gives 47,000 hits on Google...) without considering that they may not be applicable to their specific project!
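To make the point about constants concrete, here is a small worked example of my own (not from the presentation), using the commonly cited form of the UCP calculation and Karner's textbook value of 20 person-hours per point; the counts are invented:

# Worked example (mine, with invented counts) of how a fixed constant drives a UCP estimate.
# Commonly cited form: UCP = (UUCW + UAW) * TCF * ECF; effort = UCP * hours_per_ucp,
# where 20 person-hours per UCP is the usual textbook (Karner) constant.

uucw = 150            # unadjusted use case weight (sum over classified use cases)
uaw = 12              # unadjusted actor weight
tcf, ecf = 1.0, 0.9   # technical and environmental complexity factors

ucp = (uucw + uaw) * tcf * ecf
for hours_per_ucp in (15, 20, 28):   # the "magic" productivity constant
    print(hours_per_ucp, round(ucp * hours_per_ucp), "person-hours")

# The very same counting yields estimates from roughly 2,200 to 4,100 person-hours,
# depending only on which constant you pick - and nothing guarantees that any of
# them reflects your team.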

Therefore, if "So long as these invalid metrics are used carelessly, there can be no true software engineering, only a kind of amateurish craft that uses rough approximations instead of precise measurement.", I prefer to use experience and short iterations for planning.

ps. 1: the cited paper defends the use of Function Points, which I think is not applicable to Iterative & Incremental cycles.

ps. 2: I will repeat a previous remark on UCP: tell me how you can use it in a (really) Iterative & Incremental project (remember that real I&I cycles only detail Use Cases when needed, so how are you going to count the number of steps of every use case and then get a "precise" effort estimation?). In spite of this, many teachers still teach UCP and I&I cycles in the same package...

Sunday, October 31, 2010

Effort Metrics Sophisma - Part I

This is a summary and translation into English of a presentation prepared by me and an associate researcher in 2008. It was used to support our arguments against the detailed one-year plans demanded by the management of a big and interesting project we have been taking part in since 2006.

The goal of this presentation was to show that detailed planning (months ahead) in software development either:
1) Forces waterfall cycles, or
2) Is inaccurate, leading to a lot of changes in the baselines, which in turn lead to high costs for the control function.

Planning in the (hard) Industry

Traditionally, industry follows waterfall cycles (in spite of Concurrent Engineering), as you can see, for instance, in Gera's lifecycle. In the other engineering disciplines, product metrics are well established, in some cases for centuries. If you say that 10 meters of wall will use 500 bricks, it will use 500 bricks in Brazil, China, or Denmark. In other words, in "traditional" engineering, products are tangible - you can measure them, no matter who made them or where you are. A brick is a brick, and that's it.

(before you say that someone can count lines of code: even this metric is not exact, since I can write the same code using more or fewer lines, or even write different algorithms with different numbers of lines that still do the same thing - so please forget that LOC can be exact or tangible)


First Logical Chaining

Detailed planning needs → detailed estimates
Detailed estimates need → detailed product design (for defining activities in detail)
Then
Detailed planning needs → detailed product design
So we have
Detailed product design (at the beginning of the project) → waterfall
Then
Detailed planning → waterfall!

First Conclusions

The argument could stop here, since it would fall into a discussion of whether or not to use a waterfall cycle, which is already known to be inefficient for software development.

Still, it could be argued that in some cases it is necessary to do detailed product design at the beginning, for example, for Government contracts. In the next post I will show that even in those cases - when the Law forces Big Design Up Front (BDUF) - the planning won't be accurate.

Thursday, October 28, 2010

Free/Open Source ERP (FOS-ERP) - Part VI

In 2007 I wrote a book chapter on the differences between FOS-ERP and proprietary ERP (P-ERP), published as part of the Handbook on Research in Enterprise Systems in 2008. This post is part of a series that revisits this paper. Please refer to the first post of this series to better understand the structure used in this comparison. 

Opportunities and Challenges
FOS-ERP offers a series of opportunities for actors that are currently out of (or poorly positioned in) the ERP market. Since there is no such thing as a free lunch, these opportunities come together with a series of challenges, as listed below.

For smaller consulting firms:
a) Opportunities:
P-ERP vendors generally impose rigid procedures, associated with high costs, on firms that want to enter their partner networks, raising the barriers for smaller enterprises to become players in this market. In contrast, smaller consulting firms can enter the FOS-ERP market incrementally, increasing their commitment to a project as new business opportunities appear and bring more financial income. In other words, firms can start by contributing small improvements to the project as a way of gaining knowledge of the system's platform and framework, and, as customers appear, more money can be invested in a growing commitment to the project.
b) Challenges:
If on one hand it is easier to enter the market, on the other it is harder to retain clients: a broader consultancy basis empowers the demand side, making customers more demanding and reducing profit margins.
Keeping the quality level among a heterogeneous network of consulting service providers is also a major challenge. FOS-ERP in general lacks certification and quality assurance programs that guarantee service levels to clients. However, exactly those programs keep smaller consulting firms away from P-ERP, pushing them towards FOS-ERP. For a small consulting firm, a possible solution to this deadlock is to start with smaller, less demanding projects, and then move towards bigger ones as its deployment processes and related activities gain maturity. This maturity will become the firm's competitive advantage in a highly competitive FOS-ERP market.

For smaller adopters:
a) Opportunities:
Lower costs open new opportunities for Small and Medium Enterprises (SME) to become ERP adopters. With globalization, small firms suffer more and more from competition, and when they try to modernize their processes, they hit the wall of the global players' high costs, or have to adopt smaller off-the-shelf (and also proprietary) solutions that tie them to a single supplier which usually doesn't have a partner network. In contrast, FOS-ERP is less expensive and support can be found in different ways, including from individuals and small consulting firms.
This is also true for local governments and developing countries in general, since FOS-ERP reduces costs and technological dependency on global players. In fact, FOSS in general is an opportunity for developing countries to shift from buyers to players in the software industry.
b) Challenges:
Lower costs can also mean that adopters have to deal with lower service levels, thus stressing the necessity of carefully evaluating FOS-ERP options and the maturity of their supporting services. Actually, as said before, consulting certification is still in its early stages for FOS-ERP, so quality of service must be carefully addressed during contract negotiation.

For researchers:
a) Opportunities:
I've been contributing to ERP5 since its conception. During this time it was possible to observe, and sometimes take part in, all the processes that compose an ERP solution, from conception and development to business models, deployment, operation and maintenance, and evolution. This is a really good opportunity, since most research papers on ERP are related to deployment and operation, given that P-ERP companies don't usually open their solutions' internals to researchers. Smaller research groups can find their way in this area by getting associated with a FOS-ERP project and contributing to specific parts of it.
b) Challenges:
If on one hand the openness of FOS-ERP may give researchers more information on internal features and development processes, on the other hand it is harder to get information from a distributed set of partners that sometimes maintain informal relationships. Social and economic aspects, like reward structures, must be taken into account to understand the dynamics of FOS-ERP, as in every FOSS project, bringing more components to be analyzed.

For individuals
a) Opportunities:
FOS-ERP represents a unique opportunity for an individual to install an ERP framework and understand its internals. It is the chance to participate in a big software development project without being an employee of a big company. Also, the developer can incrementally gain knowledge of the system and get free support from the community, without the investment required by expensive P-ERP training and certification programs. In the future, these advantages may bring more freelance developers into FOS-ERP communities, currently formed mostly by consulting companies' employees.
b) Challenges:
Learning the internals of FOSS in general means spending considerable time understanding the system architecture, design decisions, and specific features. Moreover, FOS-ERP still lacks courseware in general*** to help accelerate the learning process, and many times the individual must count on Web sites, mailing lists, discussion forums, and the good will of community members to acquire deeper knowledge of the framework.

***When I originally wrote this in 2007, ERP5's OSOE Project didn't exist yet. My opinion on ERP5 may be biased, but the truth is that Nexedi agreed on this point and decided to start a web-based training program to fill this gap.

Free/Open Source ERP (FOS-ERP) - Part V

In 2007 I wrote a book chapter on the differences between FOS-ERP and proprietary ERP (P-ERP), published as part of the Handbook on Research in Enterprise Systems in 2008. This post is part of a series that revisits this paper. Please refer to the first post of this series to better understand the structure used in this comparison. 

Operation
During operation, environmental changes will lead to corresponding changes in the system. These changes can be conducted by the original vendor, by other service providers, or even by the adopter with the help of the community. Operation costs are also reduced by the other technologies on which FOS-ERP usually relies, such as Operating Systems, Office Suites, Middleware etc.

A concluding remark is that, although experience has shown that most of the time the adopter will not be active in tasks that involve coding, FOS-ERP is still a good choice, since it reduces vendor dependency. Moreover, the openness of the code offers many more opportunities for creating a competitive differential by implementing innovative processes or algorithms and by integrating with other solutions.

Up to this point I have talked about the differences between FOS-ERP and P-ERP on the adopter side. To understand the differences on the vendor side, please refer to the original article. In the next post I will talk about the opportunities and challenges brought by FOS-ERP.

Wednesday, October 20, 2010

Free/Open Source ERP (FOS-ERP) - Part IV

In 2007 I wrote a book chapter on the differences between FOS-ERP and proprietary ERP (P-ERP), published as part of the Handbook on Research in Enterprise Systems in 2008. This post is part of a series that revisits this paper. Please refer to the first post of this series to better understand the structure used in this comparison.

Detailed Design and Implementation

The detailed design phase focuses on refining models and parametrization. The implementation phase concentrates on programming/configuring, validating, integrating, and releasing modules for initial use. Remember that if you are using an iterative and incremental process, you will do this module by module, or even by implementing intermediate versions of modules: modules can have partial implementations that will be refined during operation; in other words, we can go back and refine a module in a future iteration. Figure 1 revisits Gera's life cycle by changing it from a Waterfall to an Iterative & Incremental perspective.



Figure 1: Iterative & Incremental Gera's Lifecycle

In Figure 1, the outer loop refers to the iterations, while the internal loop refers to the possibility of releasing incremental versions of the modules.

If the adopter opts to participate actively in the selected project, deeper design and implementation decisions are involved, as well as the necessity of investing more human and financial resources for understanding the FOS-ERP framework, developing and maintaining parts of it, and managing the relationship with the project's community.

If a FOS-ERP vendor is involved, customization and maintenance contracts must define the responsibilities of each party in the deployment process. For instance, what should the vendor do if it finds a bug injected by the customer? What priority should the vendor give to correcting this bug? Actually, is the vendor responsible for correcting it at all, since for this part of the system the adopter decided to take advantage of the solution's free license, thereby exempting the vendor from responsibility for the bug?

Furthermore, the adopter has the option of assuming different degrees of involvement for each module. For ordinary modules, like payroll, the adopter can let the vendor do the work. However, for strategic modules, for which the adopter believes it holds a competitive advantage by following its own business processes, the adopter can take an active role from detailed design to implementation and maintenance, to ensure that the business knowledge, or at least the more precious details that keep the competitive advantage, stays in the adopter's company. In that situation the vendor is limited to acting as a kind of advisor to the adopter.

A very interesting point is the openness of parts customized for, and sponsored by, a specific adopter. Maybe the adopter doesn't want to become a developer at all - which is very likely to happen - but it still wants to keep some tailored parts of the system secret. In these cases, the ERP's licensing terms must be adapted, so that the general openness of the code is guaranteed while some client-sponsored customized parts can be kept closed.

Although this last point may seem like nonsense in FOSS terms, it is a common real-life situation in FOS-ERP. In fact, I know of a case in which an adopter company sponsored the whole development of a FOS-ERP over a three-year period without becoming a prosumer, keeping only a specific algorithm secret. The original license had to be changed to fit this customer's demand.

Thursday, October 14, 2010

Free/Open Source ERP (FOS-ERP) - Part III

In 2007 I wrote a book chapter on the differences between FOS-ERP and proprietary ERP (P-ERP), published as part of the Handbook on Research in Enterprise Systems in 2008. This post is part of a series that revisits this paper. Please refer to the first post of this series to better understand the structure used in this comparison.

Requirements and Preliminary Design


Given that most software development (and customization) today is done (or should be done) through iterative and incremental life cycles, the requirements, preliminary design, detailed design, and implementation phases are performed in a loop.

Following its list of priority requirements, the adopter can model its main business processes – as part of the Preliminary Design – in order to check how the different ERP systems fit them. At this point, FOS-ERP needs to be evaluated using the criteria traditionally used to evaluate ERPs in general, as well as criteria related specifically to Free/Open Source Software (FOSS), such as the maturity of the community and the levels of support.

An interesting point regarding FOS-ERP is that, although it can produce a smaller financial impact, it may bring a bigger knowledge and innovation impact. Access to the source code in FOS-ERP can lead to a much better exploration of the ERP's capabilities, thus allowing a better implementation of differentiated solutions. Of course, software development resources must be available to achieve this, which means that for smaller organizations this is usually not possible.

From this standpoint, the strategic positioning of an adopter in relation to a FOS-ERP seems to be of the greatest importance, given the possibility of deriving competitive advantage from the source code. The adopter must therefore decide whether to behave as a simple consumer, only obtaining the solution from a vendor or the community, or to become a prosumer, mixing the passive acquisition of commodity parts of the system with the active development of strategic ones. Thus it is clear that when an adopter considers FOS-ERP as an alternative, it should also consider developing parts of the system to fit its requirements – taking into account that this kind of positioning means allocating managerial and technical resources to development tasks in a FOSS environment.

Sunday, October 10, 2010

Free/Open Source ERP (FOS-ERP) - Part II

In 2007 I wrote a book chapter on the differences between FOS-ERP and proprietary ERP (P-ERP), published as part of the Handbook on Research in Enterprise Systems in 2008. This post is part of a series that revisits this paper. Please refer to the first post of this series to better understand the structure used in this comparison.


Concept
During this phase, high-level objectives are established, such as the acquisition strategy, preliminary time and cost baselines, and the expected impact of the ERP adoption on the organization. If, besides customization, development is necessary, then in the case of FOS-ERP the level of involvement of the adopter in this development can also be established. In other words, even from this initial point the adopter can start considering the possibility of contributing to a FOS-ERP project, becoming a prosumer - a mixture of consumer and producer of the solution. The final decision on whether or not to become a prosumer will be possible only during the following phases, when the adopter better understands the solution requirements and the solution alternatives.

Small and micro enterprises will hardly become prosumers, because they lack the IT personnel necessary for development - I will talk about this later.

Free/Open Source ERP (FOS-ERP) - Part I

In 2007 I wrote a book chapter on the differences between FOS-ERP and proprietary ERP (P-ERP), published as part of the Handbook on Research in Enterprise Systems in 2008. I will start now a series of posts that will revisit this paper.

This first part will quickly introduce the Generalized Enterprise Reference Architecture and Methodology (GERAM) as a framework for comparing FOS and P-ERP.

GERAM defines seven life-cycle phases for any enterprise entity and can be used as a template life cycle to analyze FOS-ERP selection, deployment, and evolution. These phases, represented in Figure 1, can be summarized as follows:
a) Identification: identifies the particular enterprise entity in terms of its domain and environment.
b) Concept: conceptualizes the entity's mission, vision, values, strategies, and objectives.
c) Requirements: comprises the set of human, process, and technology oriented aspects and activities needed to describe the operational requirements of the enterprise.
d) Design: models the enterprise entity and helps to understand the system's functionalities.
e) Implementation: the design is transformed into real components. After being tested and approved, the system is released into operation.
f) Operation: the actual use of the system, including the user feedback that can drive a new entity life cycle.
g) Decommission: represents the disposal of parts of the entity, or of the whole entity, after its successful use.
 
 
Figure 1. GERAM Life Cycle Phases


Except for decommission and identification, which are not (directly) influenced by licensing models, these phases can be used to better understand how FOS-ERP differs from P-ERP, as the next posts will explain.

Saturday, October 9, 2010

Proof of Concept for the BLDD Tool - Part V

Last week I bumped into what I think is one of the first academic papers on BDD to be published in a scientific journal. We have published three technical reports on arxiv.org, but they weren't peer reviewed: Filling the Gap between Business Process Modeling and Behavior Driven Development, Mapping Business Process Modeling constructs to Behavior Driven Development Ubiquitous Language, and A Tool Stack for BDD in Python.

This paper is about applying Model Driven Development (MDD) techniques to BDD, following the principles presented in the Agile for MDD white paper. The authors use Foundational UML (fUML), which defines a basic virtual machine for UML, to create bUML, a tool to support BDD activities that automatically updates the project status.

Our approach differs from theirs in a few ways:
1) Although the examples in the paper are too "recursive" (given-when-then is used to describe the creation of stories and scenarios) to let us check the details of their proposal, it is clear that they follow MDD; in fact, at some point they say "The generated code intends to be complete, with no placeholders for the developer to fill out." We do the opposite: we leave placeholders to be filled out. However, I still need to see a more practical example to confirm this.
2) Our proposed tool is meant to run step by step; I saw no reference to this kind of feature in bUML's description.
3) They use OCL as the basis for textual modeling; we propose to use a more natural "should" language. We envision the use of (some of) should-dsl to insert and describe conditions (logical, acceptance) for the tests. OCL is a standard; however, should-dsl is closer to the user's language, as the toy sketch below suggests.
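To make the contrast concrete, here is a tiny self-contained toy in Python. It does not use the real should-dsl package (whose matchers are richer); the names should, equal_to, and at_least are purely illustrative. The point is only that should-style statements read closer to the user's words than an OCL constraint such as context Order inv: self.total >= 0.

# A toy "should language" - illustrative only, not the real should-dsl API.
class _Should:
    def __ror__(self, value):      # handles the left side of:  value |should| ...
        self._value = value
        return self

    def __or__(self, matcher):     # handles the right side of: ... |should| matcher
        assert matcher(self._value), "expectation failed"
        return True

should = _Should()

def equal_to(expected):
    return lambda value: value == expected

def at_least(minimum):
    return lambda value: value >= minimum

# The same business condition, written as readable, executable statements:
order_total = 42
order_total |should| at_least(0)
order_total |should| equal_to(42)

The real should-dsl library offers this kind of readability out of the box; the sketch only shows the style of sentence we want the specifications to contain.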

I need some more time to check bUML - I couldn't find the software or real-world examples. What I can say now is that they follow MDD and we follow Agile, so our proposals differ in the way we intend to implement and use the tools.

Friday, October 1, 2010

BLDD, BPEL, and BPMN - a first opinion

My good colleague Charles Moller asked me how BLDD differs from BPEL and BPMN.

I will start with the easier part: BPMN. According to Wikipedia (from which I copied all the phrases between quotes), it is "a standard for business process modeling, and provides a graphical notation for specifying business processes in a Business Process Diagram (BPD)." Since BLDD's proposal is to use any BPM notation to write specifications (possibly complemented by a textual language), BPMN is an option for BLDD.

Going further, "The primary goal of BPMN is to provide a standard notation that is readily understandable by all business stakeholders. These business stakeholders include the business analysts who create and refine the processes, the technical developers responsible for implementing the processes, and the business managers who monitor and manage the processes. Consequently, BPMN is intended to serve as common language to bridge the communication gap that frequently occurs between business process design and implementation."
In other words, as expected, BPMN can be used as a Ubiquitous Language (or a Shared Language, as I dare to call this concept).

Moreover, BPMN also provides a mapping from the graphics of the notation to the underlying constructs of execution languages, particularly the Business Process Execution Language (BPEL). BPEL is an orchestration language; in other words, it is used to specify "an executable process that involves message exchanges with other systems, such that the message exchange sequences are controlled by the orchestration designer." Now, it is interesting to read about the relation between BPMN and BPEL:

"
Relationship of BPEL to BPMN
There is no standard graphical notation for BPEL, as the OASIS technical committee decided this was out of scope. Some vendors have invented their own notations.
(...)
Others have proposed to use a substantially different business process modeling language, namely Business Process Modeling Notation (BPMN), as a graphical front-end to capture BPEL process descriptions.
(...)
However, the development of [tools that map from BPMN to BPEL] has exposed fundamental differences between BPMN and BPEL, which make it very difficult, and in some cases impossible, to generate human-readable BPEL code from BPMN models. Even more difficult is the problem of BPMN-to-BPEL round-trip engineering: generating BPEL code from BPMN diagrams and maintaining the original BPMN model and the generated BPEL code synchronized, in the sense that any modification to one is propagated to the other.
"
This final paragraph gave me some hope that BLDD is not a proposal made in vain!

BLDD is based on and inspired by BDD, which in turn is influenced by Domain Driven Design (DDD). In other words, for all the reasons exposed by these techniques, having all the artifacts (models, requirements, source code, tests, documentation...) readable by humans is a must-have feature. Round-tripping is also absolutely necessary. Besides following these principles, BLDD is simpler and independent of any execution engine - although we are investigating it in environments with workflow engines. I am not saying that BPEL isn't good; I am sure that some really complex systems can take great advantage of it. However, we want a more generic proposal, one that can be implemented in any language.
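Just to make "readable by humans from end to end" concrete, a BLDD specification could contain something like the hypothetical scenario below (it is not taken from any real specification of ours); the same text is what later gets bound to ordinary code and automated tests:

Scenario: Close the day's sales
  Given three sales confirmed by customers
  When the clerk closes the day
  Then the system reports "3 sales awaiting to be sent"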

I know of BPELScript; however, it seems to be a language for describing processes, and I think it is not a full-fledged programming language. An idea to investigate is the use of automated tests with BPELScript, making it another proposal for BLDD (something like BPELScript + TDD or even BDD). However, what we propose now with BLDD is to use general-purpose languages, such as Python, Ruby, and Java, to implement everything, and to do it in a way readable by humans.

So, in other words, I think that BLDD and BPEL have the same goal of implementing systems based on business processes. However:
a) BPEL (i) needs heavier machinery, (ii) already provides a series of artifacts for dealing with highly complex systems, and (iii) is well supported by industry.
b) BLDD (i) is simpler in general, (ii) is heavily based on TDD, with all its advantages, and (iii) works with human-readable artifacts from end to end.

Which one should you use in your case? I think it depends on your demands, technical knowledge, and development culture. I don't have a final answer, mainly because we still need to advance the BLDD proposal and also because I prefer to follow the Keep It as Simple as Possible (KISP) principle. I think there is room for BPELScript + BLDD; however, more time is needed to analyze this proposal (any volunteers?).

Anyway, after reading this quite interesting two-year-old article (some of the problems expressed in it may already be solved by now), I have a final remark: sometimes "enterprise class solutions" become so complex that programmers end up abandoning them, as happened with CORBA (and, to some extent, J2EE).

Thursday, September 30, 2010

On Certifications and Tribal Culture - Part III

In the two previous posts of this series I told a case and a tale. Although the first is real and the second is a fantasy, they hold a similarity: both are stories about how people lose focus on the product they should deliver and start worrying more about the execution of a given process, which may have worked in the past or in some specific situations, but is not necessarily good right now. In both cases we could see a process that was formalized, was "scientific", and reached its goals - however, with a lot of wasted resources.

So what's this all about?

It is about trying to prevent problems instead of solving them after they happen. This is quite obvious; however, many, many people waste resources solving problems that could have been avoided, because they are stuck to assumptions that are no longer valid and to techniques that are obsolete, and mainly because they have difficulty promoting changes to their process, given political and cultural resistance.

It is also about keeping the focus on the product - I mean, subordinating the process to the product, not the opposite. In the isolated tribe tale, do you remember when one of the specialists says "He has no experience in burning forests"? What's your focus, burning forests or having roasted pork? Again the answer is obvious; however, many, many companies are proudly burning forests around the globe to have roasted pork...

Why keep a document representing a Requirements Traceability Matrix if you can automatically and safely connect requirements to code? Because I know how to build a traceability matrix, because I don't know how to automate tests, because it is so hard to change our process... Sorry, but if you used one of these answers, you are burning forests! You are solving problems caused by your own solutions...
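To illustrate the point, here is a minimal sketch in Python of what "connecting requirements to code" can mean in practice. The decorator, the requirement ID, and the test are hypothetical; the idea is that the traceability report is generated from the automated tests themselves instead of being maintained by hand.

REGISTRY = []   # pairs of (requirement id, test function)

def requirement(req_id):
    """Tag an automated test with the requirement it verifies."""
    def tag(test_func):
        REGISTRY.append((req_id, test_func))
        return test_func
    return tag

@requirement("REQ-007")
def test_sale_is_queued_for_shipping():
    queue = ["sale-1"]          # stand-in for calls to the real application
    assert len(queue) == 1

def traceability_report():
    # One line per requirement: which test covers it and whether it passes now.
    for req_id, test in REGISTRY:
        try:
            test()
            status = "PASS"
        except AssertionError:
            status = "FAIL"
        print("%s  covered by %s  [%s]" % (req_id, test.__name__, status))

traceability_report()   # prints: REQ-007  covered by test_sale_is_queued_for_shipping  [PASS]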

Ok, but what does certification have to do with all that? I believe that, in the end, certification is about following a given process, making people lose focus on the final product. Unfortunately, all the certified teams that I know use expensive and slow processes - this doesn't mean they are bad teams, but they are slow. And I also know some very good teams without a single certification. Thus, certification doesn't prove that your team is good in the medium term. The only proof of quality is a list of satisfied customers...

You must review your process all the time, and make it flexible enough to be changed in relatively cheap ways. Check which activities add value to the product and make you reach the client's goals faster, more safely, and more cheaply. If an activity doesn't add value for the user and isn't enforced by law, environmental concerns, or morals, it is waste. If it is vital to your process, maybe your process is wrong. It may have worked in the past, but it needs to be reviewed.

In general, every activity performed to correct errors is waste. On the other hand, avoiding errors is promoting process improvement. Yes, welcome to Lean Thinking!

Keep your eyes on product quality and costs, and on process lead time and responsiveness to demand. Make everything executable, even documentation - and requirements in particular, because your users won't interact with traceability matrices and the like, even indirectly.

Ah, and one more thing: try to automate the repetitive tasks, not the creative ones! Software is about knowledge!

Sunday, September 26, 2010

On Certifications and Tribal Culture - Part II

Now it is time to tell a tale, which I heard from an associate who in turn read it somewhere I can't remember, so I apologize for not referencing the original and for changing its last 20%.

Once upon a time there was an isolated tribe that lived on an island and loved pork, although they ate it raw, since they were a bit primitive regarding cooking in general. One day, during a storm, lightning struck a tree and started a fire in the forest where the pigs lived. After the fire, the tribe found a lot of burned pigs. Since they were hungry, they decided to eat the pigs anyway, and realized how good roasted pork was!

Realizing that the next storm would take too long to come, they decided to reproduce the roasted pork process by setting fire to the forest. Their first attempt wasn't successful, because the wind was blowing in another direction. The chief, with all his wisdom, decided to appoint a very intelligent man to study the winds and determine when the wind would change to the necessary direction. After months of careful observation and data analysis, this man determined the best moment to set the fire. And he was right!

All the adults in the tribe stopped what they were doing and ran to the forest to set fire to it. They got some pigs, but in smaller quantities than expected. They needed to improve their productivity in fire setting! The chief then appointed a commission to discover what was happening, and, after a lot of data analysis and long meetings, they realized that grass was easier to set on fire than trees and, even better, grew faster!

The next step was to cut the trees down and let the grass grow. However, cutting trees down is a hard task; therefore, the chief decided to appoint a team to do it and, of course, a sub-chief to coordinate the team. After cutting a lot of trees down, they realized that the grass didn't grow as expected. Again, the chief appointed a specialist, who created a system for wetting the grass and, of course, allocated people to his team.

Now their process was really beautiful and well organized: they had a council on forest firing (not on pork roasting, but on firing the forest, which in turn would give them roasted pork!), specialists in wind direction, and specialists, workers, and managers for cutting trees down, moving the logs, wetting the grass, and so on!

One day a castaway arrived at the island. The chief, with all his wisdom, realized that the man was good and invited him to check his marvelous process for obtaining roasted pork, with many specialized teams and nice management based on scientific methods. Aiming to show a different way of doing things, the man asked their best hunter to accompany him, and although the hunter was a bit out of shape (he was now working on log transportation - a vital activity in the process!), he followed the man into the jungle. After some time, they came back with a dead pig. The man chopped some logs, set fire to them, and in one hour they had roasted pork!

The man told the chief that they didn't need all those people and all that resource expenditure to have roasted pork!

Suspicious, the chief summoned the council, formed by a set of specialists in fire setting, log transportation, and such, to ask what they thought. They presented a lot of good reasons not to trust the man:
-He has no experience in burning forests!
-It works for one pig, I would like to see this working with a lot of pigs!
-We don't have enough skilled hunters! (they were doing other stuff right now)
-Our process is mature and works, in the end we do have roasted pigs!
-And our scientific methods, they are not lying!
-Why go after pigs when we can set fire to the jungle and kill them without running?
-Can you imagine a bunch of hunters running in the middle of the forest without us to guide them?

The chief then decided to put that crazy man in a canoe and send him back to where he came from. Peace returned to the tribe, and everyone kept their roles, in particular the many managers and specialists.

But a serious problem was emerging... since they had devastated their environment, roasted pork production was no longer keeping up with demand, because the pigs were not reproducing at the same pace anymore. Moreover, with all the men involved in the fire-setting process, huts were falling down even in light rain. Many adults were dedicated to the process and couldn't do anything else, so their huts were destroyed by a storm.

When the elders, women, and children started to complain, the "process people" said: "Can't you understand? That's the best way of having roasted pork; we have evolved this process over the years!"

However, after some time the pigs finally disappeared, and even the "process teams" started to starve - and to sleep outdoors. The chief then summoned the council, and they said:
-Fire setting is like this, we cannot change its nature. Now we have to move to another island!

The chief, happy to have such a team of specialists at his side, appointed a new council formed by the same people - after all, they had shown their great management skills in making such a complex process (the fire setting) work!



+++++++++++++++++++++++++++++++++++++


What's the moral behind this tale?


The moral is that, after some time, you lose focus on the user demands and on the value chain. You start to follow the process, even when it is becoming expensive, complicated, and slow, mesmerized by its apparent perfection.

I call it the "The Beauty of the Beast Phenomenon in Project Management".

After polishing your process you think it is beautiful and perfect. The problem is not in trying to improve your process; the problem is (i) not identifying waste and (ii) not mapping activities to the value chain - the things that add value for the user.

A process that works is not necessarily a good process. IT cannot be a burden to Production, no matter the certification. As Goldratt would say, the Goal is to make money - not to get certified... No problem if you think that certification is on the path to making money, but beware the beauty of slow and heavy processes.

Saturday, September 25, 2010

On Certifications and Tribal Culture - Part I

This post talks a bit about process and certifications, as this previous one did.

Some time ago two colleagues were faced with a curious situation. After developing and testing in semi-critical environments, they were finally going to put an industrial decision support system into operation in a critical environment.

The first funny thing was the meeting with the company's IT department: two developers, one user, and... six people from IT! A DBA, a network manager, an information security specialist, a "requirements specialist" - who had absolutely no idea of the application domain - and two i-have-no-idea-what-they-were-doing-there specialists in something-that-doesn't-add-any-value-to-the-client-but-is-vital-to-the-IT-department.

The first three quickly got to the point, took their notes, supplied a nice and short schedule, and were eager to leave the meeting and do their work. The other three started asking very basic questions (one of them asked the same questions she herself had asked in two previous meetings) and in the end provided a three-month schedule to "certify the solution". The second funny detail was that both the hardware and the development tools had been bought by them - and they had taken almost one year to do so. Thus, the certification of the application would take one month, while the re-certification of the platform would take three months.

When our user asked "why all that?", they said "it's our process."

Now I'll go straight to the end: while the first three took one month to do the really necessary work, the "process guys" took SIX months to do their job. The fact is that any, and I mean any, problem they could have found (and they didn't) wouldn't have caused more damage (or even 10% of the damage) than the six-month delay caused in terms of production throughput problems and related costs. However, of course, they had a certified process to follow.

This IT Department is Cobit'ed, ITIL'ed, PMP'ed, and even Function Point'ed.

However, their users hate them.

Almost every single time we met our users, they complained about how slow and bureaucratic IT was and, worse than that, how it caused delays in production and, consequently, financial losses. And every time users complained, IT repeated their mantra: "it is the process; if we don't follow it, something wrong can happen." Even when our user tried to assume all the risks, they again repeated their other mantra: "the process doesn't allow it; only higher-level decision makers can change this."

One can see two weird things in this story: first, no matter if you are causing problems for the production department, you must follow the rules of the support department. Second, if changes to the process are needed, they will take a long time to happen.

Have you ever heard of the tail wagging the dog?

I know that all those certifications promise alignment of business and IT. And I DO believe that in most cases, when a company gets them, it really has alignment between business and IT.

However, certifications have three problems: i) they are based on documents, and documents are not software or hardware - in other words, users don't use documents, especially those filled in by IT; ii) they represent a snapshot of the environment, and this environment changes; yet iii) how can you be responsive to changes if the process imposes a lot of bureaucracy to implement them?

What a company needs to stay competitive is to be responsive to changes. The only way of being bureaucratic and responsive at the same time is having a lot of people to manage change. The problem is that a lot of people cost... a lot! Therefore, bureaucratic processes are slow or expensive. Sorry for saying this, but in the real world things are even worse: these processes become slow AND expensive. Of course, neither the certifiers nor the certified will tell you this. Just ask the users if they are satisfied.

So, what's the solution? Continuous improvement. I would exaggerate: Quick Continuous Improvement. If your process, certified or not, cannot provide this, change it. Quickly.


And the tribal culture part of this post? I will tell a tale in my next post.

Tuesday, September 21, 2010

Proof of Concept for the BLDD Tool - Part IV

One way for a tool to prove its utility is to use it in scenarios where things go wrong. Figure 1 shows what happens if the implemented code is not in compliance with the requirements: the user will see a message saying that the scenario didn't pass.

Figure 1: Error generated by code not compliant to requirements

In the message in Figure 1, the error was generated because the last step was not implemented correctly: the message "9 sales awaiting to be sent" was not generated by the system, therefore the developer must check the code and correct it. In that way it is possible to immediately identify where the system must be improved to work as expected, which is very useful while the system is still in development, or when changes are needed.

Imagine that the "9 sales awaiting to be sent" message represents an improvement to this business process. I insert it on my specifications and run the simulation, so that I can identify exactly the point where the system must be modified to become compliant to the requirements. This is exactly what BDD proposes, the difference here is that we are joining a graphical representation to it and giving the user the option of running the business process step by step and see how the system is behaving.

In other words, the user can check a live process and the live system that responds to this process. If anything goes wrong, the tool will identify what and where, making corrections easier.

It is important to note that this is a proof of concept; my idea was to launch it quickly so that more people can discuss and contribute to both the method and the tool. Besides Woped, we used Cucumber, which in turn uses BDD's Ubiquitous Language (Given-When-Then). However, as I said in the previous post of this thread, we may define a UL for every BP representation and make the underlying text mirror the representation. That's our next step.