This is NOT a Java-specific question (but I'm implementing the solution in Java).
I have always been a firm believer that, in most cases, once you define and organize the data to best express the problem being solved (within reasonable system constraints), the algorithm pretty much solves itself, becoming obvious from the data arrangement. This has served me in solving [computer-related] problems of literally all kinds.
I'm now half-way through the design stage of a major feature for the system I'm working on. This feature adds generation of on-change events for an extensive class-tree of persistent data-objects. My boss and I proposed two solutions:
- Process-based: extensive [somewhat messy] changes across the entire large data-model class-tree, affecting only the code, not the data. The result is hardwired into the code-base.
- Data-based: extensive [but transparent] changes to the data-model, with very specific and local changes/additions to the code, yielding an entirely generic solution. The result is pretty much independent of the existing code-base.
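For concreteness, here's a minimal sketch of the kind of thing I mean by the data-based approach, using the standard `java.beans.PropertyChangeSupport` plumbing (the class names `ObservableEntity` and `Customer` are hypothetical, purely for illustration; the real feature is far more involved):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical base class: the change-notification machinery lives in the
// data-model itself, so each concrete data-object only needs one local
// call per setter instead of wiring spread across the code-base.
abstract class ObservableEntity {
    private final PropertyChangeSupport changes = new PropertyChangeSupport(this);

    public void addChangeListener(PropertyChangeListener l) {
        changes.addPropertyChangeListener(l);
    }

    // Subclasses report mutations here; the event plumbing stays generic.
    protected void fireChange(String property, Object oldValue, Object newValue) {
        changes.firePropertyChange(property, oldValue, newValue);
    }
}

// Hypothetical persistent data-object from the class-tree.
class Customer extends ObservableEntity {
    private String name;

    public void setName(String name) {
        String old = this.name;
        this.name = name;
        fireChange("name", old, name); // the only per-setter addition
    }

    public String getName() { return name; }
}
```

A listener registered once on the entity then sees every change, regardless of which code path performed the mutation:

```java
Customer c = new Customer();
c.addChangeListener(evt ->
        System.out.println(evt.getPropertyName() + " -> " + evt.getNewValue()));
c.setName("Alice");
```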
Guess which solution the bosses picked? The non-generic, double-bagger ugly and messy one, [mostly] because "it doesn't dirty the data-model". Never mind that I feel this added "dirt" is a valuable feature in itself, one that will be most useful later. I can't describe the specific feature in a relevant level of detail (10 pages) anyway.
I think that "dirty data" is almost always better and easier to understand and maintain than "dirty code". Isn't this pretty much a base assumption of OOD? What do you think?