
The headaches of centralised data

The UK government's woes concerning the potential loss of personal data have received a lot of publicity in recent months. They call into question the whole strategy adopted by governments of building ever larger centralised databases, integrating more and more data from multiple applications such as passports, social security, driving licences and health services. Developing such databases creates an insurmountable security nightmare: just who can you trust absolutely to control access to such universal data sources?

Here's a radical thought: instead of centralising the data and building multiple applications around the same centralised database, why not build applications that can safely access multiple data sources, allowing departmental control of the more localised data sources with full version control and audit of all data access? Interestingly, the data architecture we adopted when we built xProcess lends itself to just such an approach. We've since named the framework XAPPA (eXtensible APPlication Architecture), and its continued development promises much in terms of rapid, model-driven application development for distributed systems.

Key benefits include the following (a brief sketch of the approach appears after the list):
  • Multiple “centres” can manage the data they own and freely link to data owned by other centres.
  • Each centre manages its own access control and audit.
  • All data is versioned.
  • All access is auditable.
  • Every change is recorded.
  • Industry-standard protocols are used for versioning.
  • Data is held in XML files (optionally encrypted).
  • Transfer objects support Web 2.0 applications and tools.
  • Applications are built with MDA/DSM (model-driven architecture and domain-specific modelling).
  • Applications can share data sources.
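
To make the shape of this concrete, here is a minimal sketch in Python. It is purely illustrative: the post doesn't detail XAPPA's actual API, and every name here (DataCentre, RecordVersion, put, get, the audit log) is hypothetical. The sketch shows records held as versioned XML documents, every read and write landing in an audit log, and cross-centre links held as references rather than copies:

    # Purely illustrative sketch; all names here are hypothetical.
    import datetime
    import xml.etree.ElementTree as ET
    from dataclasses import dataclass, field

    def _now():
        return datetime.datetime.now(datetime.timezone.utc).isoformat()

    @dataclass
    class RecordVersion:
        version: int
        xml: str        # the record itself, stored as an XML document
        timestamp: str

    @dataclass
    class DataCentre:
        """One departmental 'centre' that owns, versions and audits its records."""
        name: str
        records: dict = field(default_factory=dict)    # key -> list of RecordVersion
        audit_log: list = field(default_factory=list)  # every access is recorded

        def _audit(self, who, action, key):
            self.audit_log.append((_now(), who, action, key))

        def put(self, who, key, element):
            """Store a new version of a record; old versions are never overwritten."""
            versions = self.records.setdefault(key, [])
            versions.append(RecordVersion(
                version=len(versions) + 1,
                xml=ET.tostring(element, encoding="unicode"),
                timestamp=_now(),
            ))
            self._audit(who, "write", key)

        def get(self, who, key, version=None):
            """Fetch a record (latest by default); the read itself is audited."""
            self._audit(who, "read", key)
            versions = self.records[key]
            chosen = versions[-1] if version is None else versions[version - 1]
            return ET.fromstring(chosen.xml)

    # A record in one centre links to a record owned by another centre by
    # reference (centre name plus key) instead of copying the data:
    dvla = DataCentre("dvla")
    licence = ET.Element("licence", number="D123", holder_ref="nhs:patient-42")
    dvla.put("clerk-7", "D123", licence)
    print(dvla.get("auditor-1", "D123").get("holder_ref"))  # nhs:patient-42

The point is the shape rather than the detail: versions are append-only, reads are as auditable as writes, and each centre keeps control of the data it owns while still being able to link outwards.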
Frameworks like XAPPA may yet provide the means for both the integration and the distributed management of data that future personal data systems will need. While all data storage and retrieval systems run the risk of penetration, distributed systems at least limit the risk of total data penetration, the doomsday scenario that few seem to realise is only too likely with conventional centralised database approaches.

Popular posts from this blog

Does your Definition of Done allow known defects?

Is it just me, or do you also find it odd that some teams have clauses like this in their definition of done (DoD)?

... the Story will contain defects of level 3 severity or less only ...

Of course they don't mean you have to put minor bugs in your code (that really would be mad), but it does mean you can sign the Story off as "Done" if the bugs you discover in it are only minor (spelling mistakes, graphical misalignment, faults with easy workarounds, and so on). I saw DoDs like this some time ago and was seriously puzzled by the madness of it. I was reminded of it again at a meet-up discussion recently; it's clearly a practice that's not uncommon.

Let's look at the consequences of this policy. 

Potentially, every User Story that is signed off as "Done" could generate several additional (low-priority) Defect Stories. It's possible that finishing a Story (with no additional user requirements) will result in an increase in…

"Plan of Intent" and "Plan of Record"

Ron Lichty is well known in the Software Engineering community on the West Coast as a practitioner, as a seasoned project manager of many successful ventures, and through the SIGs and conferences in which he is active. In spite of knowing Ron by correspondence over a long period of time, it was only at JavaOne this year that we finally got together, and I'm very glad we did.

Ron wrote to me after our meeting:

I told a number of people later at JavaOne, and even later that evening at the Software Engineering Management SIG, about xProcess. It really looks good. A question came up: It's a common technique in large organizations to keep a "Plan of Intent" and a "Plan of Record" - to have two project plans, one for the business partners and boss, one you actually execute to. Any support for that in xProcess?

Good question! Here's my reply...

There is support in xProcess for an arbitrary number of target levels through what we call (in the process definitions) P…

Understanding Cost of Delay and its Use in Kanban

Cost of Delay (CoD) is a vital concept to understand in product development. It should guide the ordering of work items, even if, as is often the case, estimating it quantitatively is difficult or even impossible. Analysing Cost of Delay, even if done only qualitatively, is important because it focuses attention on the business value of work items and on how that value changes over time. An understanding of Cost of Delay is essential if you want to maximise the flow of value to your customers.

Don Reinertsen, in his book Flow [1], has shown that, if you want to deliver the maximum business value with a given size of team, you should give the highest priority not to the most valuable work items in your "pool of ideas," nor even to the most urgent items (those whose business value decays at the fastest rate), nor to your smallest items. Rather, you should prioritise those items with the highest value of urgency (or CoD) divided by the time taken to implement them. Reinertsen called this appro…
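
To make the arithmetic concrete, here is a small worked example in Python with hypothetical numbers (the figures and item names are my own, not from the post or from Reinertsen). Ordering by CoD divided by duration can rank a modest but quick item above a more valuable but slower one:

    # Hypothetical work items: cost of delay as value lost per week of waiting,
    # and estimated implementation time in weeks.
    items = [
        {"name": "A", "cod": 10, "weeks": 5},  # most valuable, but slow
        {"name": "B", "cod": 4,  "weeks": 1},  # modest value, but quick
        {"name": "C", "cod": 6,  "weeks": 3},
    ]

    # Prioritise by CoD divided by duration, highest first.
    for item in sorted(items, key=lambda i: i["cod"] / i["weeks"], reverse=True):
        print(item["name"], item["cod"] / item["weeks"])
    # Prints B (4.0) ahead of A (2.0) and C (2.0): the quick item comes first,
    # even though A has the highest cost of delay in absolute terms.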